Test Report: Docker_Linux_crio_arm64 21764

d8ceda1a406080ee928dec4912f2c0ffeefd6083:2025-10-18:41957

Failed tests (37/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.68
35 TestAddons/parallel/Registry 17.07
36 TestAddons/parallel/RegistryCreds 0.67
37 TestAddons/parallel/Ingress 145.59
38 TestAddons/parallel/InspektorGadget 6.28
39 TestAddons/parallel/MetricsServer 6.48
41 TestAddons/parallel/CSI 44.9
42 TestAddons/parallel/Headlamp 3.42
43 TestAddons/parallel/CloudSpanner 5.27
44 TestAddons/parallel/LocalPath 8.53
45 TestAddons/parallel/NvidiaDevicePlugin 6.28
46 TestAddons/parallel/Yakd 6.31
98 TestFunctional/parallel/ServiceCmdConnect 603.5
126 TestFunctional/parallel/ServiceCmd/DeployApp 600.93
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
136 TestFunctional/parallel/ServiceCmd/Format 0.5
137 TestFunctional/parallel/ServiceCmd/URL 0.49
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.14
150 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.13
151 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.35
152 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.26
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
191 TestJSONOutput/pause/Command 1.73
197 TestJSONOutput/unpause/Command 2.22
250 TestScheduledStopUnix 40.32
271 TestPause/serial/Pause 9.49
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.5
303 TestStartStop/group/old-k8s-version/serial/Pause 9.3
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.49
312 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.44
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.11
327 TestStartStop/group/embed-certs/serial/Pause 8.89
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.52
339 TestStartStop/group/newest-cni/serial/Pause 7.45
340 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.05
348 TestStartStop/group/no-preload/serial/Pause 6.65
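
Most of the addon failures above share a single root cause: each `addons disable` command exits with status 11 (MK_ADDON_DISABLE_PAUSED) because minikube's pre-flight paused-state check runs `sudo runc list -f json` inside the node, and that command fails with "open /run/runc: no such file or directory" (see the per-test logs below). One plausible but unconfirmed explanation is that this crio image uses crun as its OCI runtime, which keeps container state under /run/crun, so runc's default state directory never exists. A hypothetical way to check from the host, assuming the node container is named addons-006674:

	docker exec addons-006674 ls -d /run/runc /run/crun     # see which state dir actually exists
	docker exec addons-006674 sudo runc list -f json        # reproduce the failing pre-flight check
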
TestAddons/serial/Volcano (0.68s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-006674 addons disable volcano --alsologtostderr -v=1: exit status 11 (683.299381ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:33:25.093671  301933 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:33:25.094598  301933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:25.094640  301933 out.go:374] Setting ErrFile to fd 2...
	I1018 09:33:25.094659  301933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:25.094949  301933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:33:25.095298  301933 mustload.go:65] Loading cluster: addons-006674
	I1018 09:33:25.095706  301933 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:25.095749  301933 addons.go:606] checking whether the cluster is paused
	I1018 09:33:25.095881  301933 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:25.095923  301933 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:33:25.096399  301933 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:33:25.122538  301933 ssh_runner.go:195] Run: systemctl --version
	I1018 09:33:25.122621  301933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:33:25.141266  301933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:33:25.247885  301933 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:33:25.247974  301933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:33:25.283365  301933 cri.go:89] found id: "92482b56ebf7555fd05147ea25c2da176d87de2950820c3294d15ee1cae2b52d"
	I1018 09:33:25.283389  301933 cri.go:89] found id: "2668cbad9c190dab776247ab10f13e5d60a628e8326305be890dfb8023e10693"
	I1018 09:33:25.283396  301933 cri.go:89] found id: "3868b4ac74b7b2e804174805500883d50e014524523f2fcde2d34c8dae255aa3"
	I1018 09:33:25.283400  301933 cri.go:89] found id: "03c9c979a54ef881644e7a011bf46c6b361f61e955be32b471adedb4f1a228fa"
	I1018 09:33:25.283403  301933 cri.go:89] found id: "9d6a5e7844b19d29c3ee472ccc2ff323792accf04d9c7596b7995838d6ef2216"
	I1018 09:33:25.283407  301933 cri.go:89] found id: "025d3e64c63bd07bcb96631e06f0121dadeb4099055266bb9e87560dbbfdbe24"
	I1018 09:33:25.283410  301933 cri.go:89] found id: "e66aaf86ae284811e190a01db6cd600e4e81b9b038b9d7bdbf9e98398afc5f21"
	I1018 09:33:25.283414  301933 cri.go:89] found id: "fc5f92cc54e3945a4051248c76127d44b77cd5ad41e7680481bf12c73368473b"
	I1018 09:33:25.283417  301933 cri.go:89] found id: "4ed69c6d109cc4bbd324675d793ff430f77eb44fa1add8cd214ea977b38e369c"
	I1018 09:33:25.283425  301933 cri.go:89] found id: "442597e18340796966eb4234f5a955b362dab31d6337efdd6c0daac25ab74e5f"
	I1018 09:33:25.283429  301933 cri.go:89] found id: "54b6974a01255eb0d8fc4a27a1fff1addf769a358124f1111139388415ca2915"
	I1018 09:33:25.283432  301933 cri.go:89] found id: "d7a1cd7ba1844e20a9b434534d2ace9dc4b8410daae08b71ea72c8b4983d46d2"
	I1018 09:33:25.283436  301933 cri.go:89] found id: "1aec9843e6b35b7265e47196412e8358c0ebe00a6e40a979d385546804b7b85a"
	I1018 09:33:25.283439  301933 cri.go:89] found id: "fdaf99bae646f8f12090f49649ca8839c3524ff82dc518bbcc5c5bb5e5652ec8"
	I1018 09:33:25.283443  301933 cri.go:89] found id: "faa78827234374214c9f4cdd38747d941a5f322f9f1a6eb45f5a61fc89ba3085"
	I1018 09:33:25.283448  301933 cri.go:89] found id: "8ba1ab4998b33157d1c11d514e67020abe0f4da2b6dbd327b40e0e14cb877744"
	I1018 09:33:25.283452  301933 cri.go:89] found id: "7a4cd51451e0593916b537cc8613320fe84f5ad1b48e9c20ea79b02ebff89f08"
	I1018 09:33:25.283457  301933 cri.go:89] found id: "ee39b4a9868c7aec2142eb39fa00467bfd823efe9960710ad5f7a6d956fff7cc"
	I1018 09:33:25.283460  301933 cri.go:89] found id: "6864cc8c9035cc4900e88044a87d6126b379de12ae10cf15ebcbac3d449777c6"
	I1018 09:33:25.283464  301933 cri.go:89] found id: "7c7055bef3a7ada650e4d5f05a879413867ddb0163357c223b1f47a1b921b99f"
	I1018 09:33:25.283469  301933 cri.go:89] found id: "265553ed8d31e015701ccfb66997006c5a0cb46907fc11e25d67d2b5235e54e6"
	I1018 09:33:25.283472  301933 cri.go:89] found id: "218e3162f40e71fa576a92a613a2a422c61a439446739273ed3ec3b5b069db24"
	I1018 09:33:25.283475  301933 cri.go:89] found id: "ca64f5775c712d47b50002e93a4481eb4abcb5b068389fb2bfc06c1f7f58345c"
	I1018 09:33:25.283479  301933 cri.go:89] found id: ""
	I1018 09:33:25.283534  301933 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:33:25.300616  301933 out.go:203] 
	W1018 09:33:25.303624  301933 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:33:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:33:25.303647  301933 out.go:285] * 
	W1018 09:33:25.689127  301933 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:33:25.692068  301933 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-006674 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.68s)
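
Note that the first half of the paused check succeeded: the cri.go:89 "found id" lines above show crictl listing the kube-system containers, and only the follow-up runc listing failed. A hedged way to rerun the two steps separately, reusing the exact commands from the log:

	# Step 1 (succeeded in the log): list kube-system containers via crictl.
	docker exec addons-006674 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Step 2 (failed in the log): list containers from runc's default state dir.
	docker exec addons-006674 sudo runc list -f json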

TestAddons/parallel/Registry (17.07s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.287918ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-flkkz" [cd38f302-4660-4066-897a-e2246722c55f] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002902646s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-46rp2" [99a95a84-cd1c-42d2-b8a8-a0bb70a90f31] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003101609s
addons_test.go:392: (dbg) Run:  kubectl --context addons-006674 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-006674 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-006674 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.395364164s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 ip
2025/10/18 09:33:52 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-006674 addons disable registry --alsologtostderr -v=1: exit status 11 (351.845511ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:33:52.744341  302515 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:33:52.745276  302515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:52.745319  302515 out.go:374] Setting ErrFile to fd 2...
	I1018 09:33:52.745338  302515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:52.745625  302515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:33:52.745979  302515 mustload.go:65] Loading cluster: addons-006674
	I1018 09:33:52.746463  302515 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:52.746513  302515 addons.go:606] checking whether the cluster is paused
	I1018 09:33:52.746670  302515 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:52.746720  302515 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:33:52.747222  302515 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:33:52.773366  302515 ssh_runner.go:195] Run: systemctl --version
	I1018 09:33:52.773421  302515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:33:52.803733  302515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:33:52.955724  302515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:33:52.955864  302515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:33:53.002488  302515 cri.go:89] found id: "92482b56ebf7555fd05147ea25c2da176d87de2950820c3294d15ee1cae2b52d"
	I1018 09:33:53.002550  302515 cri.go:89] found id: "2668cbad9c190dab776247ab10f13e5d60a628e8326305be890dfb8023e10693"
	I1018 09:33:53.002570  302515 cri.go:89] found id: "3868b4ac74b7b2e804174805500883d50e014524523f2fcde2d34c8dae255aa3"
	I1018 09:33:53.002589  302515 cri.go:89] found id: "03c9c979a54ef881644e7a011bf46c6b361f61e955be32b471adedb4f1a228fa"
	I1018 09:33:53.002607  302515 cri.go:89] found id: "9d6a5e7844b19d29c3ee472ccc2ff323792accf04d9c7596b7995838d6ef2216"
	I1018 09:33:53.002649  302515 cri.go:89] found id: "025d3e64c63bd07bcb96631e06f0121dadeb4099055266bb9e87560dbbfdbe24"
	I1018 09:33:53.002670  302515 cri.go:89] found id: "e66aaf86ae284811e190a01db6cd600e4e81b9b038b9d7bdbf9e98398afc5f21"
	I1018 09:33:53.002694  302515 cri.go:89] found id: "fc5f92cc54e3945a4051248c76127d44b77cd5ad41e7680481bf12c73368473b"
	I1018 09:33:53.002720  302515 cri.go:89] found id: "4ed69c6d109cc4bbd324675d793ff430f77eb44fa1add8cd214ea977b38e369c"
	I1018 09:33:53.002740  302515 cri.go:89] found id: "442597e18340796966eb4234f5a955b362dab31d6337efdd6c0daac25ab74e5f"
	I1018 09:33:53.002760  302515 cri.go:89] found id: "54b6974a01255eb0d8fc4a27a1fff1addf769a358124f1111139388415ca2915"
	I1018 09:33:53.002778  302515 cri.go:89] found id: "d7a1cd7ba1844e20a9b434534d2ace9dc4b8410daae08b71ea72c8b4983d46d2"
	I1018 09:33:53.002799  302515 cri.go:89] found id: "1aec9843e6b35b7265e47196412e8358c0ebe00a6e40a979d385546804b7b85a"
	I1018 09:33:53.002816  302515 cri.go:89] found id: "fdaf99bae646f8f12090f49649ca8839c3524ff82dc518bbcc5c5bb5e5652ec8"
	I1018 09:33:53.002834  302515 cri.go:89] found id: "faa78827234374214c9f4cdd38747d941a5f322f9f1a6eb45f5a61fc89ba3085"
	I1018 09:33:53.002853  302515 cri.go:89] found id: "8ba1ab4998b33157d1c11d514e67020abe0f4da2b6dbd327b40e0e14cb877744"
	I1018 09:33:53.002879  302515 cri.go:89] found id: "7a4cd51451e0593916b537cc8613320fe84f5ad1b48e9c20ea79b02ebff89f08"
	I1018 09:33:53.002897  302515 cri.go:89] found id: "ee39b4a9868c7aec2142eb39fa00467bfd823efe9960710ad5f7a6d956fff7cc"
	I1018 09:33:53.002924  302515 cri.go:89] found id: "6864cc8c9035cc4900e88044a87d6126b379de12ae10cf15ebcbac3d449777c6"
	I1018 09:33:53.002942  302515 cri.go:89] found id: "7c7055bef3a7ada650e4d5f05a879413867ddb0163357c223b1f47a1b921b99f"
	I1018 09:33:53.002964  302515 cri.go:89] found id: "265553ed8d31e015701ccfb66997006c5a0cb46907fc11e25d67d2b5235e54e6"
	I1018 09:33:53.002983  302515 cri.go:89] found id: "218e3162f40e71fa576a92a613a2a422c61a439446739273ed3ec3b5b069db24"
	I1018 09:33:53.003001  302515 cri.go:89] found id: "ca64f5775c712d47b50002e93a4481eb4abcb5b068389fb2bfc06c1f7f58345c"
	I1018 09:33:53.003019  302515 cri.go:89] found id: ""
	I1018 09:33:53.003089  302515 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:33:53.025336  302515 out.go:203] 
	W1018 09:33:53.028449  302515 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:33:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:33:53.028473  302515 out.go:285] * 
	W1018 09:33:53.034956  302515 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:33:53.039032  302515 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-006674 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (17.07s)

TestAddons/parallel/RegistryCreds (0.67s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.72683ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-006674
addons_test.go:332: (dbg) Run:  kubectl --context addons-006674 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-006674 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (317.984602ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:34:29.711735  304230 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:34:29.713380  304230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:29.713444  304230 out.go:374] Setting ErrFile to fd 2...
	I1018 09:34:29.713466  304230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:29.713825  304230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:34:29.714167  304230 mustload.go:65] Loading cluster: addons-006674
	I1018 09:34:29.714588  304230 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:29.714634  304230 addons.go:606] checking whether the cluster is paused
	I1018 09:34:29.714764  304230 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:29.714807  304230 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:34:29.715449  304230 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:34:29.739209  304230 ssh_runner.go:195] Run: systemctl --version
	I1018 09:34:29.739285  304230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:34:29.768269  304230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:34:29.892912  304230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:34:29.893022  304230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:34:29.927558  304230 cri.go:89] found id: "92482b56ebf7555fd05147ea25c2da176d87de2950820c3294d15ee1cae2b52d"
	I1018 09:34:29.927581  304230 cri.go:89] found id: "2668cbad9c190dab776247ab10f13e5d60a628e8326305be890dfb8023e10693"
	I1018 09:34:29.927586  304230 cri.go:89] found id: "3868b4ac74b7b2e804174805500883d50e014524523f2fcde2d34c8dae255aa3"
	I1018 09:34:29.927599  304230 cri.go:89] found id: "03c9c979a54ef881644e7a011bf46c6b361f61e955be32b471adedb4f1a228fa"
	I1018 09:34:29.927603  304230 cri.go:89] found id: "9d6a5e7844b19d29c3ee472ccc2ff323792accf04d9c7596b7995838d6ef2216"
	I1018 09:34:29.927606  304230 cri.go:89] found id: "025d3e64c63bd07bcb96631e06f0121dadeb4099055266bb9e87560dbbfdbe24"
	I1018 09:34:29.927611  304230 cri.go:89] found id: "e66aaf86ae284811e190a01db6cd600e4e81b9b038b9d7bdbf9e98398afc5f21"
	I1018 09:34:29.927614  304230 cri.go:89] found id: "fc5f92cc54e3945a4051248c76127d44b77cd5ad41e7680481bf12c73368473b"
	I1018 09:34:29.927618  304230 cri.go:89] found id: "4ed69c6d109cc4bbd324675d793ff430f77eb44fa1add8cd214ea977b38e369c"
	I1018 09:34:29.927624  304230 cri.go:89] found id: "442597e18340796966eb4234f5a955b362dab31d6337efdd6c0daac25ab74e5f"
	I1018 09:34:29.927631  304230 cri.go:89] found id: "54b6974a01255eb0d8fc4a27a1fff1addf769a358124f1111139388415ca2915"
	I1018 09:34:29.927635  304230 cri.go:89] found id: "d7a1cd7ba1844e20a9b434534d2ace9dc4b8410daae08b71ea72c8b4983d46d2"
	I1018 09:34:29.927638  304230 cri.go:89] found id: "1aec9843e6b35b7265e47196412e8358c0ebe00a6e40a979d385546804b7b85a"
	I1018 09:34:29.927641  304230 cri.go:89] found id: "fdaf99bae646f8f12090f49649ca8839c3524ff82dc518bbcc5c5bb5e5652ec8"
	I1018 09:34:29.927644  304230 cri.go:89] found id: "faa78827234374214c9f4cdd38747d941a5f322f9f1a6eb45f5a61fc89ba3085"
	I1018 09:34:29.927649  304230 cri.go:89] found id: "8ba1ab4998b33157d1c11d514e67020abe0f4da2b6dbd327b40e0e14cb877744"
	I1018 09:34:29.927659  304230 cri.go:89] found id: "7a4cd51451e0593916b537cc8613320fe84f5ad1b48e9c20ea79b02ebff89f08"
	I1018 09:34:29.927662  304230 cri.go:89] found id: "ee39b4a9868c7aec2142eb39fa00467bfd823efe9960710ad5f7a6d956fff7cc"
	I1018 09:34:29.927665  304230 cri.go:89] found id: "6864cc8c9035cc4900e88044a87d6126b379de12ae10cf15ebcbac3d449777c6"
	I1018 09:34:29.927669  304230 cri.go:89] found id: "7c7055bef3a7ada650e4d5f05a879413867ddb0163357c223b1f47a1b921b99f"
	I1018 09:34:29.927673  304230 cri.go:89] found id: "265553ed8d31e015701ccfb66997006c5a0cb46907fc11e25d67d2b5235e54e6"
	I1018 09:34:29.927676  304230 cri.go:89] found id: "218e3162f40e71fa576a92a613a2a422c61a439446739273ed3ec3b5b069db24"
	I1018 09:34:29.927679  304230 cri.go:89] found id: "ca64f5775c712d47b50002e93a4481eb4abcb5b068389fb2bfc06c1f7f58345c"
	I1018 09:34:29.927682  304230 cri.go:89] found id: ""
	I1018 09:34:29.927739  304230 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:34:29.948168  304230 out.go:203] 
	W1018 09:34:29.952425  304230 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:34:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:34:29.952503  304230 out.go:285] * 
	W1018 09:34:29.958962  304230 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:34:29.962699  304230 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-006674 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.67s)

TestAddons/parallel/Ingress (145.59s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-006674 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-006674 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-006674 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [975293a2-5fff-4701-b70f-3b75302df5c0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [975293a2-5fff-4701-b70f-3b75302df5c0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003718088s
I1018 09:34:37.819023  295193 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-006674 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.791647168s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
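
The exit status 28 reported by ssh matches curl's "operation timed out" exit code, which is consistent with the 2m10s the command ran before failing: the request appears to stall rather than be refused. A hypothetical follow-up with an explicit timeout and verbose output would show whether it hangs at TCP connect or while waiting for the HTTP response:

	out/minikube-linux-arm64 -p addons-006674 ssh "curl -v --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
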
addons_test.go:288: (dbg) Run:  kubectl --context addons-006674 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-006674
helpers_test.go:243: (dbg) docker inspect addons-006674:

-- stdout --
	[
	    {
	        "Id": "2a58daa84df606f1a3eacd3ed59a710b3ede45b497c9ff78e57c7c851671ea0c",
	        "Created": "2025-10-18T09:30:55.363190236Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296351,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:30:55.429094391Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/2a58daa84df606f1a3eacd3ed59a710b3ede45b497c9ff78e57c7c851671ea0c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2a58daa84df606f1a3eacd3ed59a710b3ede45b497c9ff78e57c7c851671ea0c/hostname",
	        "HostsPath": "/var/lib/docker/containers/2a58daa84df606f1a3eacd3ed59a710b3ede45b497c9ff78e57c7c851671ea0c/hosts",
	        "LogPath": "/var/lib/docker/containers/2a58daa84df606f1a3eacd3ed59a710b3ede45b497c9ff78e57c7c851671ea0c/2a58daa84df606f1a3eacd3ed59a710b3ede45b497c9ff78e57c7c851671ea0c-json.log",
	        "Name": "/addons-006674",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-006674:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-006674",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2a58daa84df606f1a3eacd3ed59a710b3ede45b497c9ff78e57c7c851671ea0c",
	                "LowerDir": "/var/lib/docker/overlay2/29d18a6b1974dba062c4a5a3e8cc7328dbfa0c44e2c9c6fd83c6843a9a7db9fb-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/29d18a6b1974dba062c4a5a3e8cc7328dbfa0c44e2c9c6fd83c6843a9a7db9fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/29d18a6b1974dba062c4a5a3e8cc7328dbfa0c44e2c9c6fd83c6843a9a7db9fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/29d18a6b1974dba062c4a5a3e8cc7328dbfa0c44e2c9c6fd83c6843a9a7db9fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-006674",
	                "Source": "/var/lib/docker/volumes/addons-006674/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-006674",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-006674",
	                "name.minikube.sigs.k8s.io": "addons-006674",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3bab0fa0bf04f2f8254ca852eaeed22fb804d3deb9d1901bb25f9bf177d20b8b",
	            "SandboxKey": "/var/run/docker/netns/3bab0fa0bf04",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-006674": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:cd:c9:47:b3:08",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fb4a9eff6b0ac61b7987fb82f64010d024279717261b1f1f792e101a365c1e6d",
	                    "EndpointID": "ceb2cb30ade6094597b8eab5237c686483adfa9ab74715d58ec0b51eb3192d35",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-006674",
	                        "2a58daa84df6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
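
The inspect output confirms the node is running and not paused, and that host port 33138 is published for 22/tcp, matching the ssh client line in the stderr above. The same mapping can be read directly with the inspect template the test harness itself uses (cf. the cli_runner.go:164 lines above):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-006674
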
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-006674 -n addons-006674
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-006674 logs -n 25: (1.438441474s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-724083                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-724083 │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ start   │ --download-only -p binary-mirror-816488 --alsologtostderr --binary-mirror http://127.0.0.1:39529 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-816488   │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ delete  │ -p binary-mirror-816488                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-816488   │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ addons  │ disable dashboard -p addons-006674                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ addons  │ enable dashboard -p addons-006674                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ start   │ -p addons-006674 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ addons-006674 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ addons  │ addons-006674 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ addons  │ addons-006674 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ ip      │ addons-006674 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ addons-006674 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ addons  │ addons-006674 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ ssh     │ addons-006674 ssh cat /opt/local-path-provisioner/pvc-a1742402-0986-435b-8326-e21304879a9e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ addons  │ addons-006674 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ addons  │ addons-006674 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ addons  │ enable headlamp -p addons-006674 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ addons  │ addons-006674 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ addons  │ addons-006674 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ addons  │ addons-006674 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ addons  │ addons-006674 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ addons  │ addons-006674 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-006674                                                                                                                                                                                                                                                                                                                                                                                           │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ addons  │ addons-006674 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ ssh     │ addons-006674 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ ip      │ addons-006674 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:36 UTC │ 18 Oct 25 09:36 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:30:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:30:30.161475  295952 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:30:30.161683  295952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:30:30.161716  295952 out.go:374] Setting ErrFile to fd 2...
	I1018 09:30:30.161735  295952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:30:30.162058  295952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:30:30.162593  295952 out.go:368] Setting JSON to false
	I1018 09:30:30.163634  295952 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4381,"bootTime":1760775450,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 09:30:30.163759  295952 start.go:141] virtualization:  
	I1018 09:30:30.167275  295952 out.go:179] * [addons-006674] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:30:30.171150  295952 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:30:30.171271  295952 notify.go:220] Checking for updates...
	I1018 09:30:30.177307  295952 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:30:30.180432  295952 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 09:30:30.183450  295952 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 09:30:30.186578  295952 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:30:30.190445  295952 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:30:30.193870  295952 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:30:30.228010  295952 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:30:30.228149  295952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:30:30.285876  295952 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-18 09:30:30.276115025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:30:30.285984  295952 docker.go:318] overlay module found
	I1018 09:30:30.289128  295952 out.go:179] * Using the docker driver based on user configuration
	I1018 09:30:30.291965  295952 start.go:305] selected driver: docker
	I1018 09:30:30.291984  295952 start.go:925] validating driver "docker" against <nil>
	I1018 09:30:30.291998  295952 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:30:30.292741  295952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:30:30.348656  295952 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-18 09:30:30.339892562 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:30:30.348821  295952 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:30:30.349052  295952 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:30:30.352065  295952 out.go:179] * Using Docker driver with root privileges
	I1018 09:30:30.354939  295952 cni.go:84] Creating CNI manager for ""
	I1018 09:30:30.355041  295952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:30:30.355058  295952 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:30:30.355138  295952 start.go:349] cluster config:
	{Name:addons-006674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:30:30.360112  295952 out.go:179] * Starting "addons-006674" primary control-plane node in "addons-006674" cluster
	I1018 09:30:30.362922  295952 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:30:30.365778  295952 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:30:30.368569  295952 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:30:30.368622  295952 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 09:30:30.368634  295952 cache.go:58] Caching tarball of preloaded images
	I1018 09:30:30.368667  295952 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:30:30.368728  295952 preload.go:233] Found /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 09:30:30.368738  295952 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:30:30.369096  295952 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/config.json ...
	I1018 09:30:30.369128  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/config.json: {Name:mka65e5b9d37d2e4b2c1304e163a9cf934b6d64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:30:30.384435  295952 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 09:30:30.384582  295952 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 09:30:30.384602  295952 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 09:30:30.384607  295952 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 09:30:30.384615  295952 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 09:30:30.384620  295952 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 09:30:48.369494  295952 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 09:30:48.369536  295952 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:30:48.369566  295952 start.go:360] acquireMachinesLock for addons-006674: {Name:mk7e4142b1387a9d5103c52b0dd86664f3e789c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:48.369692  295952 start.go:364] duration metric: took 104.643µs to acquireMachinesLock for "addons-006674"
	I1018 09:30:48.369725  295952 start.go:93] Provisioning new machine with config: &{Name:addons-006674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:30:48.369810  295952 start.go:125] createHost starting for "" (driver="docker")
	I1018 09:30:48.373316  295952 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 09:30:48.373563  295952 start.go:159] libmachine.API.Create for "addons-006674" (driver="docker")
	I1018 09:30:48.373608  295952 client.go:168] LocalClient.Create starting
	I1018 09:30:48.373742  295952 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem
	I1018 09:30:48.445303  295952 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem
	I1018 09:30:48.676629  295952 cli_runner.go:164] Run: docker network inspect addons-006674 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:30:48.691352  295952 cli_runner.go:211] docker network inspect addons-006674 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:30:48.691443  295952 network_create.go:284] running [docker network inspect addons-006674] to gather additional debugging logs...
	I1018 09:30:48.691461  295952 cli_runner.go:164] Run: docker network inspect addons-006674
	W1018 09:30:48.706405  295952 cli_runner.go:211] docker network inspect addons-006674 returned with exit code 1
	I1018 09:30:48.706439  295952 network_create.go:287] error running [docker network inspect addons-006674]: docker network inspect addons-006674: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-006674 not found
	I1018 09:30:48.706451  295952 network_create.go:289] output of [docker network inspect addons-006674]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-006674 not found
	
	** /stderr **
	I1018 09:30:48.706614  295952 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:30:48.722310  295952 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ace720}
	I1018 09:30:48.722358  295952 network_create.go:124] attempt to create docker network addons-006674 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 09:30:48.722414  295952 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-006674 addons-006674
	I1018 09:30:48.779724  295952 network_create.go:108] docker network addons-006674 192.168.49.0/24 created
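
The network step above is self-contained and can be reproduced or checked by hand with the flags minikube logs; a sketch, with the profile name and subnet taken from this run:

    # recreate the bridge network with the flags from the cli_runner line above
    docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-006674 addons-006674
    # confirm the subnet and gateway that were assigned
    docker network inspect addons-006674 --format '{{json .IPAM.Config}}'
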
	I1018 09:30:48.779775  295952 kic.go:121] calculated static IP "192.168.49.2" for the "addons-006674" container
	I1018 09:30:48.779858  295952 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:30:48.795098  295952 cli_runner.go:164] Run: docker volume create addons-006674 --label name.minikube.sigs.k8s.io=addons-006674 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:30:48.812727  295952 oci.go:103] Successfully created a docker volume addons-006674
	I1018 09:30:48.812820  295952 cli_runner.go:164] Run: docker run --rm --name addons-006674-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006674 --entrypoint /usr/bin/test -v addons-006674:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:30:50.926673  295952 cli_runner.go:217] Completed: docker run --rm --name addons-006674-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006674 --entrypoint /usr/bin/test -v addons-006674:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.113817769s)
	I1018 09:30:50.926704  295952 oci.go:107] Successfully prepared a docker volume addons-006674
	I1018 09:30:50.926733  295952 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:30:50.926751  295952 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 09:30:50.926828  295952 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-006674:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 09:30:55.295484  295952 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-006674:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.368614862s)
	I1018 09:30:55.295531  295952 kic.go:203] duration metric: took 4.368760828s to extract preloaded images to volume ...
	W1018 09:30:55.295667  295952 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 09:30:55.295775  295952 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:30:55.348708  295952 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-006674 --name addons-006674 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006674 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-006674 --network addons-006674 --ip 192.168.49.2 --volume addons-006674:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:30:55.637823  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Running}}
	I1018 09:30:55.663276  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:30:55.687847  295952 cli_runner.go:164] Run: docker exec addons-006674 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:30:55.740533  295952 oci.go:144] the created container "addons-006674" has a running status.
	I1018 09:30:55.740566  295952 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa...
	I1018 09:30:55.986557  295952 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:30:56.015011  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:30:56.036269  295952 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:30:56.036294  295952 kic_runner.go:114] Args: [docker exec --privileged addons-006674 chown docker:docker /home/docker/.ssh/authorized_keys]
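
The SSH bootstrap in the three steps above (key generation, authorized_keys copy, chown) maps onto plain docker commands; a sketch assuming the container from this run and that /home/docker/.ssh already exists in the kic image:

    ssh-keygen -t rsa -N '' -f id_rsa
    docker cp id_rsa.pub addons-006674:/home/docker/.ssh/authorized_keys
    docker exec --privileged addons-006674 chown docker:docker /home/docker/.ssh/authorized_keys
    # the host port mapped to 22/tcp is ephemeral; look it up the way provisionDockerMachine does below
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-006674
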
	I1018 09:30:56.099946  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:30:56.124529  295952 machine.go:93] provisionDockerMachine start ...
	I1018 09:30:56.124639  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:30:56.151345  295952 main.go:141] libmachine: Using SSH client type: native
	I1018 09:30:56.151674  295952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1018 09:30:56.151683  295952 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:30:56.152375  295952 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 09:30:59.296950  295952 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-006674
	
	I1018 09:30:59.296976  295952 ubuntu.go:182] provisioning hostname "addons-006674"
	I1018 09:30:59.297049  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:30:59.314272  295952 main.go:141] libmachine: Using SSH client type: native
	I1018 09:30:59.314588  295952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1018 09:30:59.314607  295952 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-006674 && echo "addons-006674" | sudo tee /etc/hostname
	I1018 09:30:59.470620  295952 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-006674
	
	I1018 09:30:59.470780  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:30:59.488321  295952 main.go:141] libmachine: Using SSH client type: native
	I1018 09:30:59.488643  295952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1018 09:30:59.488660  295952 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-006674' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-006674/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-006674' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:30:59.637324  295952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:30:59.637350  295952 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 09:30:59.637384  295952 ubuntu.go:190] setting up certificates
	I1018 09:30:59.637395  295952 provision.go:84] configureAuth start
	I1018 09:30:59.637463  295952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006674
	I1018 09:30:59.654336  295952 provision.go:143] copyHostCerts
	I1018 09:30:59.654422  295952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 09:30:59.654565  295952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 09:30:59.654631  295952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 09:30:59.654682  295952 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.addons-006674 san=[127.0.0.1 192.168.49.2 addons-006674 localhost minikube]
	I1018 09:30:59.992451  295952 provision.go:177] copyRemoteCerts
	I1018 09:30:59.992514  295952 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:30:59.992553  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:00.009445  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:00.189228  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:31:00.239758  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:31:00.299436  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 09:31:00.366058  295952 provision.go:87] duration metric: took 728.628117ms to configureAuth
	I1018 09:31:00.366155  295952 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:31:00.366402  295952 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:31:00.366570  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:00.397451  295952 main.go:141] libmachine: Using SSH client type: native
	I1018 09:31:00.397794  295952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1018 09:31:00.397820  295952 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:31:00.678380  295952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:31:00.678404  295952 machine.go:96] duration metric: took 4.553849608s to provisionDockerMachine
	I1018 09:31:00.678415  295952 client.go:171] duration metric: took 12.304796866s to LocalClient.Create
	I1018 09:31:00.678428  295952 start.go:167] duration metric: took 12.304867776s to libmachine.API.Create "addons-006674"
	I1018 09:31:00.678435  295952 start.go:293] postStartSetup for "addons-006674" (driver="docker")
	I1018 09:31:00.678444  295952 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:31:00.678521  295952 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:31:00.678570  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:00.699319  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:00.805061  295952 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:31:00.808568  295952 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:31:00.808596  295952 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:31:00.808607  295952 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 09:31:00.808671  295952 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 09:31:00.808698  295952 start.go:296] duration metric: took 130.257837ms for postStartSetup
	I1018 09:31:00.809020  295952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006674
	I1018 09:31:00.826125  295952 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/config.json ...
	I1018 09:31:00.826422  295952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:31:00.826483  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:00.843431  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:00.942079  295952 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:31:00.946676  295952 start.go:128] duration metric: took 12.57685091s to createHost
	I1018 09:31:00.946702  295952 start.go:83] releasing machines lock for "addons-006674", held for 12.576995875s
	I1018 09:31:00.946787  295952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006674
	I1018 09:31:00.963814  295952 ssh_runner.go:195] Run: cat /version.json
	I1018 09:31:00.963853  295952 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:31:00.963865  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:00.963916  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:00.980943  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:01.003233  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:01.195327  295952 ssh_runner.go:195] Run: systemctl --version
	I1018 09:31:01.202260  295952 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:31:01.239202  295952 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:31:01.243928  295952 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:31:01.244057  295952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:31:01.276247  295952 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 09:31:01.276278  295952 start.go:495] detecting cgroup driver to use...
	I1018 09:31:01.276312  295952 detect.go:187] detected "cgroupfs" cgroup driver on host os
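
The "cgroupfs" result above comes from probing the host's cgroup layout; a rough equivalent check (a heuristic, not minikube's exact detect.go logic):

    # tmpfs here means a legacy cgroup v1 mount (cgroupfs driver, as detected);
    # cgroup2fs would indicate the unified v2 hierarchy
    stat -fc %T /sys/fs/cgroup/
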
	I1018 09:31:01.276364  295952 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:31:01.294943  295952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:31:01.312247  295952 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:31:01.312315  295952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:31:01.331027  295952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:31:01.348988  295952 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:31:01.473033  295952 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:31:01.600798  295952 docker.go:234] disabling docker service ...
	I1018 09:31:01.600894  295952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:31:01.624104  295952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:31:01.638266  295952 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:31:01.752769  295952 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:31:01.870635  295952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:31:01.883325  295952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:31:01.897787  295952 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:31:01.897864  295952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:01.907304  295952 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:31:01.907375  295952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:01.917763  295952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:01.926619  295952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:01.935282  295952 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:31:01.943326  295952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:01.951916  295952 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:01.965682  295952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:01.974197  295952 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:31:01.982077  295952 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:31:01.989822  295952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:31:02.112934  295952 ssh_runner.go:195] Run: sudo systemctl restart crio
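
Taken together, the sed/grep edits at 09:31:01 leave the CRI-O drop-in roughly as follows. This is a reconstruction from the commands in this log, with section placement assumed from CRI-O's stock 02-crio.conf layout, not a dump of the actual file:

    # /etc/crio/crio.conf.d/02-crio.conf (reconstructed)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
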
	I1018 09:31:02.247871  295952 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:31:02.247974  295952 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:31:02.252329  295952 start.go:563] Will wait 60s for crictl version
	I1018 09:31:02.252413  295952 ssh_runner.go:195] Run: which crictl
	I1018 09:31:02.256322  295952 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:31:02.280806  295952 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
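
The bare crictl calls above resolve CRI-O through the /etc/crictl.yaml written at 09:31:01; the endpoint can equally be passed per invocation:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images --output json
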
	I1018 09:31:02.280937  295952 ssh_runner.go:195] Run: crio --version
	I1018 09:31:02.309686  295952 ssh_runner.go:195] Run: crio --version
	I1018 09:31:02.343113  295952 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:31:02.346104  295952 cli_runner.go:164] Run: docker network inspect addons-006674 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:31:02.362119  295952 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 09:31:02.366006  295952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:31:02.376160  295952 kubeadm.go:883] updating cluster {Name:addons-006674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:31:02.376277  295952 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:31:02.376332  295952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:31:02.412027  295952 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:31:02.412053  295952 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:31:02.412120  295952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:31:02.440367  295952 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:31:02.440391  295952 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:31:02.440400  295952 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 09:31:02.440500  295952 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-006674 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-006674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
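
The empty ExecStart= in the drop-in above is load-bearing: in a systemd override it clears the ExecStart inherited from kubelet.service before the next line redefines it, since otherwise systemd rejects the unit for having more than one ExecStart. The pattern in isolation, showing a subset of the flags logged above:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
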
	I1018 09:31:02.440586  295952 ssh_runner.go:195] Run: crio config
	I1018 09:31:02.511866  295952 cni.go:84] Creating CNI manager for ""
	I1018 09:31:02.511888  295952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:31:02.511914  295952 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:31:02.511950  295952 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-006674 NodeName:addons-006674 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:31:02.512103  295952 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-006674"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
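
A generated config like the one above can be checked standalone before kubeadm runs; a sketch, assuming kubeadm v1.26+ on PATH and using the path the file is copied to a few lines below:

    # static validation of the generated multi-document kubeadm config
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
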
	
	I1018 09:31:02.512190  295952 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:31:02.520582  295952 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:31:02.520691  295952 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:31:02.528244  295952 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 09:31:02.541597  295952 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:31:02.555139  295952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1018 09:31:02.567720  295952 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:31:02.571391  295952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:31:02.581593  295952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:31:02.698510  295952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:31:02.720259  295952 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674 for IP: 192.168.49.2
	I1018 09:31:02.720281  295952 certs.go:195] generating shared ca certs ...
	I1018 09:31:02.720306  295952 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:02.721119  295952 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 09:31:03.292157  295952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt ...
	I1018 09:31:03.292189  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt: {Name:mk8d3f19ca1aa391bbc70a2b3fb9803197d9d701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:03.293019  295952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key ...
	I1018 09:31:03.293037  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key: {Name:mk26ad599c66ddda508ce2717b1cda5e0b8014d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:03.293711  295952 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 09:31:03.559084  295952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt ...
	I1018 09:31:03.559116  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt: {Name:mka5245d0f3b42eba9e957f4c851d73149e14243 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:03.559308  295952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key ...
	I1018 09:31:03.559321  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key: {Name:mkf4ae093b4c402caa2df28ffb84d0806b324996 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:03.560065  295952 certs.go:257] generating profile certs ...
	I1018 09:31:03.560129  295952 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.key
	I1018 09:31:03.560153  295952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt with IP's: []
	I1018 09:31:03.761420  295952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt ...
	I1018 09:31:03.761458  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: {Name:mk530e18a5da22bc7097f1e016fc5cc1231fa098 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:03.761628  295952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.key ...
	I1018 09:31:03.761639  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.key: {Name:mk6ae3baf71e5567c2a52f974428d76f0b7e9b1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:03.762283  295952 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.key.48582f27
	I1018 09:31:03.762305  295952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.crt.48582f27 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 09:31:04.390391  295952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.crt.48582f27 ...
	I1018 09:31:04.390423  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.crt.48582f27: {Name:mk9db4316d0425914f78037d41b1d30d1a01500e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:04.390608  295952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.key.48582f27 ...
	I1018 09:31:04.390621  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.key.48582f27: {Name:mk606cfb848bcfa9c19ef33f24a655f24829857f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:04.391376  295952 certs.go:382] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.crt.48582f27 -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.crt
	I1018 09:31:04.391461  295952 certs.go:386] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.key.48582f27 -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.key
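The apiserver certificate just copied into place was signed by minikubeCA for the SAN list logged above: 10.96.0.1 (the kube-apiserver service VIP inside the 10.96.0.0/12 service CIDR), 127.0.0.1, 10.0.0.1, and the node IP 192.168.49.2. A sketch of issuing such a serving cert against an existing CA (signServingCert is hypothetical; reuses the imports from the previous sketch plus net):

	// signServingCert issues a TLS serving certificate for the logged IP SANs,
	// signed by the given CA cert and key.
	func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, pub *rsa.PublicKey) ([]byte, error) {
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
			},
		}
		// Returns DER bytes; PEM-encode as in writeCA before writing to disk.
		return x509.CreateCertificate(rand.Reader, tmpl, caCert, pub, caKey)
	}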
	I1018 09:31:04.391514  295952 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/proxy-client.key
	I1018 09:31:04.391534  295952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/proxy-client.crt with IP's: []
	I1018 09:31:05.360879  295952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/proxy-client.crt ...
	I1018 09:31:05.360914  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/proxy-client.crt: {Name:mk3fe22b2dd523989b85719c2e72c2db16a11387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:05.361772  295952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/proxy-client.key ...
	I1018 09:31:05.361792  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/proxy-client.key: {Name:mke0d578aad2bbe61ee4a61be81d2337b12f9750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:05.362025  295952 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:31:05.362070  295952 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:31:05.362101  295952 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:31:05.362131  295952 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 09:31:05.362696  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:31:05.382243  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:31:05.401625  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:31:05.420346  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:31:05.438553  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 09:31:05.456807  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:31:05.474428  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:31:05.492244  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:31:05.510323  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:31:05.528738  295952 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:31:05.541359  295952 ssh_runner.go:195] Run: openssl version
	I1018 09:31:05.547933  295952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:31:05.556706  295952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:31:05.560549  295952 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:31:05.560615  295952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:31:05.603241  295952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
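The b5213941.0 symlink is how OpenSSL-based clients find a trusted CA: certificates under /etc/ssl/certs are looked up by subject-name hash, so minikube runs openssl x509 -hash against minikubeCA.pem (yielding b5213941 here) and links <hash>.0 to the PEM. The same two steps in a local Go sketch (trustCA is hypothetical):

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// trustCA computes the OpenSSL subject hash of a PEM certificate and
	// creates the <hash>.0 symlink OpenSSL uses for CA lookup, mirroring
	// the openssl + ln -fs pair in the log.
	func trustCA(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		os.Remove(link) // emulate ln -fs: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		// Needs root when pointed at the real /etc/ssl/certs.
		if err := trustCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			os.Exit(1)
		}
	}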
	I1018 09:31:05.611577  295952 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:31:05.615443  295952 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:31:05.615538  295952 kubeadm.go:400] StartCluster: {Name:addons-006674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:31:05.615624  295952 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:31:05.615683  295952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:31:05.644637  295952 cri.go:89] found id: ""
	I1018 09:31:05.644711  295952 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:31:05.652400  295952 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:31:05.660222  295952 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:31:05.660292  295952 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:31:05.668012  295952 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:31:05.668036  295952 kubeadm.go:157] found existing configuration files:
	
	I1018 09:31:05.668152  295952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:31:05.676179  295952 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:31:05.676244  295952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:31:05.684157  295952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:31:05.692759  295952 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:31:05.692878  295952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:31:05.700936  295952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:31:05.709535  295952 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:31:05.709688  295952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:31:05.717469  295952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:31:05.726581  295952 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:31:05.726703  295952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:31:05.734808  295952 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:31:05.776652  295952 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:31:05.776950  295952 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:31:05.800550  295952 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:31:05.800705  295952 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 09:31:05.800784  295952 kubeadm.go:318] OS: Linux
	I1018 09:31:05.800882  295952 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:31:05.800967  295952 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 09:31:05.801067  295952 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:31:05.801152  295952 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:31:05.801272  295952 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:31:05.801375  295952 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:31:05.801452  295952 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:31:05.801533  295952 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:31:05.801614  295952 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 09:31:05.867193  295952 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:31:05.867376  295952 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:31:05.867518  295952 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:31:05.875548  295952 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:31:05.881568  295952 out.go:252]   - Generating certificates and keys ...
	I1018 09:31:05.881731  295952 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:31:05.881848  295952 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:31:06.305660  295952 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:31:06.696223  295952 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:31:06.899310  295952 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:31:07.937952  295952 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:31:08.872734  295952 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:31:08.873014  295952 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-006674 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 09:31:10.974917  295952 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:31:10.975195  295952 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-006674 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 09:31:11.233222  295952 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:31:11.942054  295952 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:31:12.394529  295952 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:31:12.394813  295952 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:31:12.898310  295952 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:31:12.940250  295952 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:31:13.191231  295952 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:31:13.478734  295952 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:31:14.082514  295952 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:31:14.083166  295952 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:31:14.086016  295952 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:31:14.089461  295952 out.go:252]   - Booting up control plane ...
	I1018 09:31:14.089566  295952 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:31:14.089650  295952 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:31:14.090483  295952 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:31:14.106342  295952 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:31:14.106694  295952 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:31:14.114589  295952 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:31:14.114946  295952 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:31:14.115205  295952 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:31:14.245204  295952 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:31:14.245331  295952 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:31:15.746758  295952 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501678606s
	I1018 09:31:15.750557  295952 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:31:15.750787  295952 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 09:31:15.751025  295952 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:31:15.751248  295952 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:31:18.348726  295952 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.596955838s
	I1018 09:31:20.118166  295952 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.366631875s
	I1018 09:31:22.253935  295952 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502844159s
	I1018 09:31:22.273890  295952 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:31:22.289979  295952 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:31:22.304645  295952 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:31:22.304884  295952 kubeadm.go:318] [mark-control-plane] Marking the node addons-006674 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:31:22.321327  295952 kubeadm.go:318] [bootstrap-token] Using token: j44vbg.trsy7q1sq403c6an
	I1018 09:31:22.324840  295952 out.go:252]   - Configuring RBAC rules ...
	I1018 09:31:22.324993  295952 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:31:22.330485  295952 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:31:22.340931  295952 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:31:22.347168  295952 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:31:22.351151  295952 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:31:22.357698  295952 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:31:22.662352  295952 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:31:23.103211  295952 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:31:23.661284  295952 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:31:23.662397  295952 kubeadm.go:318] 
	I1018 09:31:23.662471  295952 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:31:23.662481  295952 kubeadm.go:318] 
	I1018 09:31:23.662563  295952 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:31:23.662571  295952 kubeadm.go:318] 
	I1018 09:31:23.662598  295952 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:31:23.662682  295952 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:31:23.662739  295952 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:31:23.662747  295952 kubeadm.go:318] 
	I1018 09:31:23.662804  295952 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:31:23.662811  295952 kubeadm.go:318] 
	I1018 09:31:23.662861  295952 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:31:23.662868  295952 kubeadm.go:318] 
	I1018 09:31:23.662922  295952 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:31:23.663003  295952 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:31:23.663079  295952 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:31:23.663088  295952 kubeadm.go:318] 
	I1018 09:31:23.663175  295952 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:31:23.663258  295952 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:31:23.663266  295952 kubeadm.go:318] 
	I1018 09:31:23.663353  295952 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token j44vbg.trsy7q1sq403c6an \
	I1018 09:31:23.663465  295952 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 \
	I1018 09:31:23.663710  295952 kubeadm.go:318] 	--control-plane 
	I1018 09:31:23.663723  295952 kubeadm.go:318] 
	I1018 09:31:23.663814  295952 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:31:23.663819  295952 kubeadm.go:318] 
	I1018 09:31:23.663915  295952 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token j44vbg.trsy7q1sq403c6an \
	I1018 09:31:23.664024  295952 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 
	I1018 09:31:23.667404  295952 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 09:31:23.667645  295952 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 09:31:23.667761  295952 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:31:23.667781  295952 cni.go:84] Creating CNI manager for ""
	I1018 09:31:23.667789  295952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:31:23.671006  295952 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 09:31:23.673881  295952 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:31:23.678086  295952 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:31:23.678107  295952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:31:23.691303  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:31:23.995467  295952 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:31:23.995596  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:23.995706  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-006674 minikube.k8s.io/updated_at=2025_10_18T09_31_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=addons-006674 minikube.k8s.io/primary=true
	I1018 09:31:24.188796  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:24.188878  295952 ops.go:34] apiserver oom_adj: -16
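The oom_adj probe above reads /proc/<apiserver-pid>/oom_adj; the -16 it records means the kernel OOM killer treats kube-apiserver as low-priority to kill (negative values lower kill priority). Reading the same value in Go (readOOMAdj is hypothetical; the PID lookup, done with pgrep in the log, is left out, and the fmt/os/strings imports from earlier sketches are assumed):

	// readOOMAdj returns the kernel OOM-killer adjustment for a PID,
	// as cat /proc/$(pgrep kube-apiserver)/oom_adj does above.
	func readOOMAdj(pid int) (string, error) {
		b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(b)), nil
	}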
	I1018 09:31:24.689559  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:25.189776  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:25.689826  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:26.188957  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:26.689542  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:27.189235  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:27.270689  295952 kubeadm.go:1113] duration metric: took 3.275137685s to wait for elevateKubeSystemPrivileges
	I1018 09:31:27.270723  295952 kubeadm.go:402] duration metric: took 21.655190172s to StartCluster
	I1018 09:31:27.270743  295952 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:27.270860  295952 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 09:31:27.271244  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:27.272096  295952 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:31:27.272239  295952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:31:27.272497  295952 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:31:27.272540  295952 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 09:31:27.272640  295952 addons.go:69] Setting yakd=true in profile "addons-006674"
	I1018 09:31:27.272673  295952 addons.go:238] Setting addon yakd=true in "addons-006674"
	I1018 09:31:27.272698  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.273229  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.273582  295952 addons.go:69] Setting inspektor-gadget=true in profile "addons-006674"
	I1018 09:31:27.273600  295952 addons.go:238] Setting addon inspektor-gadget=true in "addons-006674"
	I1018 09:31:27.273625  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.273773  295952 addons.go:69] Setting metrics-server=true in profile "addons-006674"
	I1018 09:31:27.273811  295952 addons.go:238] Setting addon metrics-server=true in "addons-006674"
	I1018 09:31:27.273866  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.274027  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.274427  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.276778  295952 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-006674"
	I1018 09:31:27.277377  295952 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-006674"
	I1018 09:31:27.277466  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.276958  295952 addons.go:69] Setting registry=true in profile "addons-006674"
	I1018 09:31:27.278143  295952 addons.go:238] Setting addon registry=true in "addons-006674"
	I1018 09:31:27.278176  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.278602  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.276973  295952 addons.go:69] Setting registry-creds=true in profile "addons-006674"
	I1018 09:31:27.280161  295952 addons.go:238] Setting addon registry-creds=true in "addons-006674"
	I1018 09:31:27.280226  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.280744  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.281487  295952 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-006674"
	I1018 09:31:27.281526  295952 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-006674"
	I1018 09:31:27.281558  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.282098  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.276979  295952 addons.go:69] Setting storage-provisioner=true in profile "addons-006674"
	I1018 09:31:27.290431  295952 addons.go:238] Setting addon storage-provisioner=true in "addons-006674"
	I1018 09:31:27.290479  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.276985  295952 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-006674"
	I1018 09:31:27.276991  295952 addons.go:69] Setting volcano=true in profile "addons-006674"
	I1018 09:31:27.290787  295952 addons.go:238] Setting addon volcano=true in "addons-006674"
	I1018 09:31:27.276997  295952 addons.go:69] Setting volumesnapshots=true in profile "addons-006674"
	I1018 09:31:27.290868  295952 addons.go:238] Setting addon volumesnapshots=true in "addons-006674"
	I1018 09:31:27.290885  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.277076  295952 out.go:179] * Verifying Kubernetes components...
	I1018 09:31:27.297502  295952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:31:27.291504  295952 addons.go:69] Setting cloud-spanner=true in profile "addons-006674"
	I1018 09:31:27.297675  295952 addons.go:238] Setting addon cloud-spanner=true in "addons-006674"
	I1018 09:31:27.297789  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.298254  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.291516  295952 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-006674"
	I1018 09:31:27.305944  295952 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-006674"
	I1018 09:31:27.305980  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.306444  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.291523  295952 addons.go:69] Setting default-storageclass=true in profile "addons-006674"
	I1018 09:31:27.326092  295952 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-006674"
	I1018 09:31:27.326526  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.291530  295952 addons.go:69] Setting gcp-auth=true in profile "addons-006674"
	I1018 09:31:27.338541  295952 mustload.go:65] Loading cluster: addons-006674
	I1018 09:31:27.338791  295952 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:31:27.339103  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.291535  295952 addons.go:69] Setting ingress=true in profile "addons-006674"
	I1018 09:31:27.351428  295952 addons.go:238] Setting addon ingress=true in "addons-006674"
	I1018 09:31:27.351504  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.352023  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.291541  295952 addons.go:69] Setting ingress-dns=true in profile "addons-006674"
	I1018 09:31:27.371542  295952 addons.go:238] Setting addon ingress-dns=true in "addons-006674"
	I1018 09:31:27.371651  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.372141  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.292399  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.398657  295952 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 09:31:27.401484  295952 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 09:31:27.401511  295952 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 09:31:27.290721  295952 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-006674"
	I1018 09:31:27.401586  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.401864  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.292409  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.407074  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.292859  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.427670  295952 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 09:31:27.371486  295952 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 09:31:27.291810  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.469263  295952 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 09:31:27.469464  295952 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 09:31:27.496724  295952 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 09:31:27.500124  295952 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 09:31:27.505264  295952 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 09:31:27.505296  295952 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 09:31:27.505365  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.505560  295952 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 09:31:27.505573  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 09:31:27.505617  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.512359  295952 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 09:31:27.512385  295952 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 09:31:27.512458  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.517489  295952 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 09:31:27.517564  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 09:31:27.517667  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.550500  295952 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 09:31:27.550525  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 09:31:27.550598  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.575026  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 09:31:27.596326  295952 addons.go:238] Setting addon default-storageclass=true in "addons-006674"
	I1018 09:31:27.596376  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.596809  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.597863  295952 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 09:31:27.600858  295952 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 09:31:27.600881  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 09:31:27.600948  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.645268  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 09:31:27.648222  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 09:31:27.651063  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 09:31:27.653992  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 09:31:27.654212  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.664115  295952 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 09:31:27.667901  295952 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 09:31:27.667932  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 09:31:27.668034  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.672733  295952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:31:27.679776  295952 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-006674"
	I1018 09:31:27.679822  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.680241  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.657330  295952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:31:27.682163  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.690523  295952 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 09:31:27.690710  295952 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:31:27.705407  295952 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 09:31:27.708855  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 09:31:27.711794  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 09:31:27.712839  295952 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 09:31:27.719226  295952 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 09:31:27.719251  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 09:31:27.719321  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.734687  295952 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:31:27.734707  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:31:27.734778  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.737082  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 09:31:27.738524  295952 node_ready.go:35] waiting up to 6m0s for node "addons-006674" to be "Ready" ...
	I1018 09:31:27.754926  295952 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 09:31:27.754949  295952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 09:31:27.755038  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.774321  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 09:31:27.777782  295952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 09:31:27.777812  295952 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 09:31:27.777890  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	W1018 09:31:27.790445  295952 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 09:31:27.806393  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.836119  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.838326  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.839132  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.840680  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.841915  295952 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 09:31:27.845077  295952 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 09:31:27.845135  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 09:31:27.845266  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.863739  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.864604  295952 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:31:27.864625  295952 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:31:27.864680  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.909888  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.921951  295952 out.go:179]   - Using image docker.io/busybox:stable
	I1018 09:31:27.924765  295952 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 09:31:27.929309  295952 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 09:31:27.929333  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 09:31:27.929411  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.949238  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.954956  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.955960  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.966824  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.996933  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:28.004154  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	W1018 09:31:28.009541  295952 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 09:31:28.009572  295952 retry.go:31] will retry after 363.527676ms: ssh: handshake failed: EOF
	I1018 09:31:28.014604  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	W1018 09:31:28.016085  295952 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 09:31:28.016108  295952 retry.go:31] will retry after 239.787661ms: ssh: handshake failed: EOF
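The two handshake EOFs above come from opening a dozen SSH sessions to the node nearly simultaneously; minikube's retry helper backs off for a randomized delay and redials. A generic sketch of that retry-with-jitter pattern (withRetry is hypothetical, not minikube's retry package; assumes math/rand and time imports):

	// withRetry retries fn with jittered delays, in the spirit of the
	// "will retry after 363ms / 239ms" lines above.
	func withRetry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Random jitter keeps parallel dialers from reconnecting in lockstep.
			time.Sleep(base + time.Duration(rand.Int63n(int64(base))))
		}
		return err
	}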
	I1018 09:31:28.146671  295952 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:28.146743  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 09:31:28.372498  295952 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 09:31:28.372570  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 09:31:28.443884  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:28.539511  295952 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 09:31:28.539590  295952 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 09:31:28.607598  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 09:31:28.668563  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 09:31:28.703849  295952 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 09:31:28.703924  295952 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 09:31:28.714046  295952 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 09:31:28.714122  295952 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 09:31:28.728387  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 09:31:28.766733  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 09:31:28.826691  295952 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 09:31:28.826713  295952 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 09:31:28.847833  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 09:31:28.853888  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 09:31:28.880435  295952 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 09:31:28.880505  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 09:31:28.903209  295952 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 09:31:28.903282  295952 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 09:31:28.920130  295952 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 09:31:28.920219  295952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 09:31:28.942435  295952 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 09:31:28.942513  295952 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 09:31:28.976272  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:31:29.023455  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 09:31:29.095335  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 09:31:29.096374  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:31:29.099198  295952 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 09:31:29.099219  295952 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 09:31:29.102514  295952 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 09:31:29.102590  295952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 09:31:29.144760  295952 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 09:31:29.144835  295952 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 09:31:29.225404  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 09:31:29.304179  295952 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 09:31:29.304203  295952 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 09:31:29.306432  295952 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 09:31:29.306452  295952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 09:31:29.352677  295952 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.670959873s)
	I1018 09:31:29.352709  295952 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
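The sed pipeline above splices a hosts plugin block into the CoreDNS Corefile, just ahead of its forward directive. Reconstructed from the sed expression, the injected block is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

Because of fallthrough, only the single synthetic host.minikube.internal record is answered locally; every other name falls through to the existing forward to /etc/resolv.conf.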
	I1018 09:31:29.413137  295952 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 09:31:29.413171  295952 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 09:31:29.522628  295952 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 09:31:29.522653  295952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 09:31:29.606431  295952 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 09:31:29.606457  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 09:31:29.606744  295952 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 09:31:29.606760  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 09:31:29.697258  295952 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 09:31:29.697284  295952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	W1018 09:31:29.742189  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:29.857291  295952 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-006674" context rescaled to 1 replicas
	I1018 09:31:29.883535  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 09:31:29.899837  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 09:31:29.929393  295952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 09:31:29.929417  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 09:31:30.345940  295952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 09:31:30.345965  295952 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 09:31:30.600651  295952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 09:31:30.600676  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 09:31:30.892963  295952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 09:31:30.892991  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 09:31:31.181670  295952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 09:31:31.181749  295952 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 09:31:31.471779  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1018 09:31:31.758503  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:32.842165  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.3981963s)
	W1018 09:31:32.842196  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:32.842214  295952 retry.go:31] will retry after 343.817823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
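This failure is deterministic, not transient: kubectl's client-side validation rejects ig-crd.yaml because a document in that file is missing the two fields every Kubernetes manifest must declare, so the retries that follow keep reproducing the same error until the manifest itself changes. For comparison, a valid CRD manifest opens with a header of this shape (the metadata name here is hypothetical, not the actual inspektor-gadget CRD):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: examples.gadget.example.io   # hypothetical name, for illustration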
	I1018 09:31:32.842264  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.234590933s)
	I1018 09:31:32.842310  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.173677536s)
	I1018 09:31:32.842351  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.113897096s)
	I1018 09:31:32.842541  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.075736885s)
	I1018 09:31:33.186770  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 09:31:33.789097  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:33.883400  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.029434409s)
	I1018 09:31:33.883477  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.907133777s)
	I1018 09:31:33.883505  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.035604487s)
	I1018 09:31:33.883515  295952 addons.go:479] Verifying addon ingress=true in "addons-006674"
	I1018 09:31:33.883623  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.787229814s)
	I1018 09:31:33.883827  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.658349187s)
	I1018 09:31:33.883578  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.860029536s)
	I1018 09:31:33.884107  295952 addons.go:479] Verifying addon metrics-server=true in "addons-006674"
	I1018 09:31:33.883605  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.788192757s)
	I1018 09:31:33.884124  295952 addons.go:479] Verifying addon registry=true in "addons-006674"
	I1018 09:31:33.886837  295952 out.go:179] * Verifying registry addon...
	I1018 09:31:33.886944  295952 out.go:179] * Verifying ingress addon...
	I1018 09:31:33.891383  295952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 09:31:33.892245  295952 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 09:31:33.922498  295952 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 09:31:33.922523  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:33.930455  295952 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 09:31:33.930481  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
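The kapi.go lines that follow are a poll loop: minikube repeatedly lists the pods matching each label selector and logs the aggregate phase until everything reports Running. A minimal client-go sketch of that pattern (the kubeconfig path and the registry selector come from this log; the interval and timeout are illustrative, not minikube's actual values):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForSelector polls the API server until every pod matching selector
    // in ns is Running, mirroring the kapi.go wait loop in this log.
    func waitForSelector(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			return err
    		}
    		running := len(pods.Items) > 0
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				running = false // still Pending, like the lines above
    			}
    		}
    		if running {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // illustrative poll interval
    	}
    	return fmt.Errorf("pods %q in %q never became Running", selector, ns)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitForSelector(cs, "kube-system",
    		"kubernetes.io/minikube-addons=registry", 6*time.Minute))
    }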
	I1018 09:31:34.007203  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.123624081s)
	W1018 09:31:34.007243  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 09:31:34.007264  295952 retry.go:31] will retry after 228.333502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
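Unlike the ig-crd.yaml case, this failure is an ordering race, exactly as the "ensure CRDs are installed first" hint says: the VolumeSnapshotClass object is applied in the same batch that creates the volumesnapshotclasses CRD, and the API server cannot map the new kind until that CRD reaches its Established condition. The forced re-apply issued a few lines below succeeds once registration completes. A sketch of checking that condition with the apiextensions client (the CRD name is from this log; the rest is illustrative, not minikube's code):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // established reports whether the API server has finished registering the
    // CRD, the precondition the first batched apply above raced against.
    func established(c *apiextclient.Clientset, name string) (bool, error) {
    	crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(
    		context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range crd.Status.Conditions {
    		if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	c, err := apiextclient.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		if ok, err := established(c, "volumesnapshotclasses.snapshot.storage.k8s.io"); err == nil && ok {
    			fmt.Println("CRD established; VolumeSnapshotClass can now be applied")
    			return
    		}
    		time.Sleep(time.Second)
    	}
    }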
	I1018 09:31:34.007311  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.10744731s)
	I1018 09:31:34.011298  295952 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-006674 service yakd-dashboard -n yakd-dashboard
	
	I1018 09:31:34.236083  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 09:31:34.402723  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:34.403011  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:34.603454  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.131585359s)
	I1018 09:31:34.603488  295952 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-006674"
	I1018 09:31:34.606654  295952 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 09:31:34.611129  295952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 09:31:34.620750  295952 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 09:31:34.620772  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:34.715935  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.529068326s)
	W1018 09:31:34.715969  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:34.715988  295952 retry.go:31] will retry after 345.302342ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:34.896602  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:34.896758  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:35.061968  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:35.115712  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:35.297841  295952 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 09:31:35.297970  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:35.320364  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:35.397781  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:35.397845  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:35.438319  295952 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 09:31:35.454598  295952 addons.go:238] Setting addon gcp-auth=true in "addons-006674"
	I1018 09:31:35.454652  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:35.455110  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:35.474048  295952 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 09:31:35.474107  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:35.494138  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
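The sshutil.go and "scp memory" entries show the transport for all of this: manifests are streamed from minikube's in-memory assets over an SSH connection to the forwarded docker port (127.0.0.1:33138 here), authenticated with the profile's id_rsa key, and commands such as the cat above run over the same channel. A rough equivalent using golang.org/x/crypto/ssh (key path, user, and port are taken from this log; skipping host-key verification is for illustration only):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key and endpoint as logged by sshutil.go above.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33138", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	// The same channel carries both "scp memory" asset writes and
    	// one-off commands like the cat of the credentials file above.
    	out, err := sess.CombinedOutput("cat /var/lib/minikube/google_cloud_project")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", out)
    }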
	I1018 09:31:35.615032  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:35.896142  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:35.897019  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:36.114718  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 09:31:36.241283  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:36.395544  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:36.395626  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:36.614730  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:36.897397  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:36.897868  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:37.115136  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:37.147839  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.911678007s)
	I1018 09:31:37.147972  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.085972877s)
	W1018 09:31:37.148010  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:37.148028  295952 retry.go:31] will retry after 743.995265ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:37.148029  295952 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.673953142s)
	I1018 09:31:37.151275  295952 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 09:31:37.154126  295952 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 09:31:37.156860  295952 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 09:31:37.156878  295952 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 09:31:37.170223  295952 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 09:31:37.170295  295952 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 09:31:37.184638  295952 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 09:31:37.184660  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 09:31:37.198046  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 09:31:37.396161  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:37.396556  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:37.618799  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:37.677495  295952 addons.go:479] Verifying addon gcp-auth=true in "addons-006674"
	I1018 09:31:37.682086  295952 out.go:179] * Verifying gcp-auth addon...
	I1018 09:31:37.685769  295952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 09:31:37.723523  295952 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 09:31:37.723548  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:37.892319  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:37.896692  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:37.897338  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:38.114988  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:38.189435  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:38.241559  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:38.397247  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:38.397762  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:38.614945  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:38.689702  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:38.690284  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:38.690311  295952 retry.go:31] will retry after 651.86289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:38.894902  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:38.895597  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:39.115351  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:39.189396  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:39.342778  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:39.396615  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:39.396847  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:39.615325  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:39.689562  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:39.895985  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:39.896881  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:40.114878  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 09:31:40.166225  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:40.166259  295952 retry.go:31] will retry after 1.693297533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:40.189384  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:40.242316  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:40.395161  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:40.395996  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:40.615369  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:40.716106  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:40.894848  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:40.895077  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:41.114566  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:41.189142  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:41.395218  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:41.395371  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:41.614424  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:41.689162  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:41.860241  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:41.896798  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:41.897456  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:42.115582  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:42.190689  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:42.242808  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:42.396279  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:42.397580  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:42.614296  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 09:31:42.673566  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:42.673598  295952 retry.go:31] will retry after 1.299020929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:42.690011  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:42.895161  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:42.895528  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:43.114823  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:43.189179  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:43.395252  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:43.395540  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:43.614921  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:43.689332  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:43.894841  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:43.895312  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:43.973794  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:44.115706  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:44.188881  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:44.396133  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:44.396640  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:44.614862  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:44.689149  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:44.742586  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	W1018 09:31:44.765377  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:44.765447  295952 retry.go:31] will retry after 2.199782569s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:44.894563  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:44.895900  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:45.128597  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:45.194270  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:45.395852  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:45.396326  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:45.614700  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:45.689594  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:45.894577  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:45.895168  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:46.114615  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:46.189496  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:46.394184  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:46.395355  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:46.614078  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:46.688969  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:46.895254  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:46.895411  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:46.965725  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:47.115641  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:47.189603  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:47.241563  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:47.396879  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:47.397255  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:47.615528  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:47.689311  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:47.764300  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:47.764418  295952 retry.go:31] will retry after 2.641135294s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:47.894342  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:47.895452  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:48.114998  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:48.188885  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:48.394319  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:48.394717  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:48.615207  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:48.689655  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:48.895746  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:48.895837  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:49.115008  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:49.189281  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:49.242290  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:49.395787  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:49.395604  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:49.614725  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:49.689535  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:49.897073  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:49.897160  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:50.114193  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:50.189562  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:50.394073  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:50.395121  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:50.406489  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:50.615330  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:50.689483  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:50.894941  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:50.896310  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:51.118505  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:51.189866  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:51.217326  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:51.217377  295952 retry.go:31] will retry after 4.387535304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
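The apply fails kubectl's client-side validation: the first YAML document in /etc/kubernetes/addons/ig-crd.yaml carries neither apiVersion nor kind, so it is rejected before anything reaches the API server. Because the file on disk does not change between attempts, the retries that follow hit the same error. A hedged way to confirm, assuming shell access to the node (profile name taken from the log):

    minikube -p addons-006674 ssh -- head -n 5 /etc/kubernetes/addons/ig-crd.yaml
    # every YAML document must open with both fields, for example:
    #   apiVersion: apiextensions.k8s.io/v1
    #   kind: CustomResourceDefinition

The --validate=false escape hatch kubectl suggests would only suppress the client-side check; it would not make a document with no apiVersion or kind usable.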
	I1018 09:31:51.395275  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:51.395827  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:51.614679  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:51.689581  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:51.742234  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:51.894687  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:51.895366  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:52.114399  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:52.189127  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:52.394991  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:52.395608  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:52.614589  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:52.689531  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:52.895376  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:52.895512  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:53.114810  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:53.190125  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:53.394675  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:53.395723  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:53.615171  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:53.689321  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:53.742287  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:53.895654  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:53.895990  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:54.115022  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:54.195461  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:54.395321  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:54.396127  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:54.614290  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:54.689336  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:54.895545  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:54.895769  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:55.115046  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:55.189260  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:55.395007  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:55.395127  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:55.605165  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:55.615815  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:55.688750  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:55.897249  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:55.897585  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:56.114670  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:56.189401  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:56.242490  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:56.395985  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:56.396369  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 09:31:56.426252  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:56.426297  295952 retry.go:31] will retry after 12.838707612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:56.614263  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:56.689248  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:56.895135  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:56.895616  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:57.128449  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:57.189345  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:57.395494  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:57.395640  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:57.614720  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:57.688641  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:57.894937  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:57.895261  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:58.114173  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:58.188923  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:58.394652  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:58.395771  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:58.615121  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:58.689008  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:58.741867  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:58.895420  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:58.895792  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:59.115743  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:59.188706  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:59.394672  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:59.394759  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:59.614700  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:59.688575  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:59.894519  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:59.895205  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:00.121905  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:00.192788  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:00.398432  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:00.398489  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:00.615124  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:00.688883  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:32:00.742002  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:32:00.895290  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:00.895782  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:01.115541  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:01.189563  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:01.394235  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:01.395300  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:01.614400  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:01.689502  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:01.895952  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:01.896164  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:02.114541  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:02.189741  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:02.395707  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:02.396083  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:02.615374  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:02.689099  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:32:02.742193  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:32:02.895190  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:02.895491  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:03.114891  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:03.189593  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:03.394157  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:03.395638  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:03.615085  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:03.689141  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:03.895281  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:03.895656  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:04.115123  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:04.191094  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:04.395544  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:04.395692  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:04.614918  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:04.688631  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:04.894825  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:04.895703  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:05.114809  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:05.189005  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:32:05.242338  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:32:05.395728  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:05.395783  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:05.614964  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:05.688755  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:05.894548  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:05.895670  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:06.114769  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:06.189598  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:06.395608  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:06.395795  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:06.615183  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:06.688901  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:06.894376  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:06.895812  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:07.116134  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:07.188951  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:07.394567  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:07.395257  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:07.614316  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:07.689171  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:32:07.741835  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:32:07.895072  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:07.895584  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:08.114639  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:08.189222  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:08.395092  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:08.395411  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:08.614479  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:08.689358  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:08.895316  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:08.895669  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:09.114800  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:09.189675  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:09.265593  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:32:09.493204  295952 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 09:32:09.493277  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:09.493468  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:09.621756  295952 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 09:32:09.621822  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:09.709455  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:09.824272  295952 node_ready.go:49] node "addons-006674" is "Ready"
	I1018 09:32:09.824350  295952 node_ready.go:38] duration metric: took 42.0858024s for node "addons-006674" to be "Ready" ...
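Node readiness gates the steps that follow: the roughly 42 s of "Ready":"False" retries above ended once the kubelet posted a Ready condition on the node. A hedged one-shot equivalent of that wait (hypothetical, run against the same cluster):

    kubectl wait --for=condition=Ready node/addons-006674 --timeout=2m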
	I1018 09:32:09.824387  295952 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:32:09.824475  295952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
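The pgrep flags make this a precise probe: -f matches against the full command line, -x requires the pattern to match that line exactly, and -n returns only the newest matching process. A sketch of the same check, assuming procps-style pgrep on the node:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo apiserver process found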
	I1018 09:32:09.911694  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:09.911821  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:10.124358  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:10.203061  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:10.397066  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:10.397584  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:10.625714  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:10.729772  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:10.897350  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:10.897908  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:11.002107  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.736473849s)
	W1018 09:32:11.002188  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:32:11.002261  295952 retry.go:31] will retry after 11.564156757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:32:11.002306  295952 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.177730308s)
	I1018 09:32:11.002343  295952 api_server.go:72] duration metric: took 43.73020793s to wait for apiserver process to appear ...
	I1018 09:32:11.002364  295952 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:32:11.002409  295952 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 09:32:11.011863  295952 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
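A 200 response with body "ok" from /healthz is the signal minikube waits for before reading the control-plane version. A hedged manual probe from the host (the apiserver serves a self-signed certificate, hence -k; anonymous access to /healthz is normally granted via the system:public-info-viewer cluster role):

    curl -k https://192.168.49.2:8443/healthz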
	I1018 09:32:11.013565  295952 api_server.go:141] control plane version: v1.34.1
	I1018 09:32:11.013639  295952 api_server.go:131] duration metric: took 11.248909ms to wait for apiserver health ...
	I1018 09:32:11.013663  295952 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:32:11.032455  295952 system_pods.go:59] 19 kube-system pods found
	I1018 09:32:11.032547  295952 system_pods.go:61] "coredns-66bc5c9577-kj5jb" [97c49f44-8c6f-4c14-a90f-31dfda93a372] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:32:11.032575  295952 system_pods.go:61] "csi-hostpath-attacher-0" [d324748f-0916-4247-b8da-42b067ee5ff2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 09:32:11.032599  295952 system_pods.go:61] "csi-hostpath-resizer-0" [05ce0eee-4f78-44d5-b868-c8c9f13f276b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 09:32:11.032632  295952 system_pods.go:61] "csi-hostpathplugin-rswxb" [feddf8af-e1d0-4f04-a39a-eedf70a898ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 09:32:11.032653  295952 system_pods.go:61] "etcd-addons-006674" [c14c573d-1612-42d5-87f8-4bd642899fae] Running
	I1018 09:32:11.032675  295952 system_pods.go:61] "kindnet-h49vl" [6a1383f0-1850-4dff-991e-4fa71596bb58] Running
	I1018 09:32:11.032695  295952 system_pods.go:61] "kube-apiserver-addons-006674" [e5eff75b-55ea-4fbb-a83d-cc8550c66472] Running
	I1018 09:32:11.032721  295952 system_pods.go:61] "kube-controller-manager-addons-006674" [6e9cccba-4dab-456b-b750-0ac8893a4371] Running
	I1018 09:32:11.032755  295952 system_pods.go:61] "kube-ingress-dns-minikube" [5d722dca-00c8-447d-a43c-acdb7b5482e3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 09:32:11.032775  295952 system_pods.go:61] "kube-proxy-k5bfv" [ecc37a01-e2c2-4a4c-a272-ac37b4cd96f3] Running
	I1018 09:32:11.032796  295952 system_pods.go:61] "kube-scheduler-addons-006674" [ac4de840-6b5c-491a-9cbc-2a080dbc17bf] Running
	I1018 09:32:11.032817  295952 system_pods.go:61] "metrics-server-85b7d694d7-szvm5" [7f1c3285-5e41-444e-a773-3a86f80ec0c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 09:32:11.032840  295952 system_pods.go:61] "nvidia-device-plugin-daemonset-j658f" [a582f724-b46c-4377-b626-fcf59ae12980] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 09:32:11.032862  295952 system_pods.go:61] "registry-6b586f9694-flkkz" [cd38f302-4660-4066-897a-e2246722c55f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 09:32:11.032895  295952 system_pods.go:61] "registry-creds-764b6fb674-tjsdw" [23cd49e2-ec97-44a9-9bd9-370ba2b403c4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 09:32:11.032917  295952 system_pods.go:61] "registry-proxy-46rp2" [99a95a84-cd1c-42d2-b8a8-a0bb70a90f31] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 09:32:11.032939  295952 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9pgqt" [afec06f4-3213-44ef-a323-a658ff117c82] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 09:32:11.032963  295952 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rfbdb" [05444b46-9758-438d-92c4-5bf39c7165b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 09:32:11.032996  295952 system_pods.go:61] "storage-provisioner" [b7652d26-3d30-439f-a886-8dc4a69c9f1e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:32:11.033018  295952 system_pods.go:74] duration metric: took 19.330228ms to wait for pod list to return data ...
	I1018 09:32:11.033040  295952 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:32:11.039295  295952 default_sa.go:45] found service account: "default"
	I1018 09:32:11.039379  295952 default_sa.go:55] duration metric: took 6.316341ms for default service account to be created ...
	I1018 09:32:11.039404  295952 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:32:11.045254  295952 system_pods.go:86] 19 kube-system pods found
	I1018 09:32:11.045351  295952 system_pods.go:89] "coredns-66bc5c9577-kj5jb" [97c49f44-8c6f-4c14-a90f-31dfda93a372] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:32:11.045383  295952 system_pods.go:89] "csi-hostpath-attacher-0" [d324748f-0916-4247-b8da-42b067ee5ff2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 09:32:11.045407  295952 system_pods.go:89] "csi-hostpath-resizer-0" [05ce0eee-4f78-44d5-b868-c8c9f13f276b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 09:32:11.045445  295952 system_pods.go:89] "csi-hostpathplugin-rswxb" [feddf8af-e1d0-4f04-a39a-eedf70a898ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 09:32:11.045474  295952 system_pods.go:89] "etcd-addons-006674" [c14c573d-1612-42d5-87f8-4bd642899fae] Running
	I1018 09:32:11.045495  295952 system_pods.go:89] "kindnet-h49vl" [6a1383f0-1850-4dff-991e-4fa71596bb58] Running
	I1018 09:32:11.045524  295952 system_pods.go:89] "kube-apiserver-addons-006674" [e5eff75b-55ea-4fbb-a83d-cc8550c66472] Running
	I1018 09:32:11.045544  295952 system_pods.go:89] "kube-controller-manager-addons-006674" [6e9cccba-4dab-456b-b750-0ac8893a4371] Running
	I1018 09:32:11.045566  295952 system_pods.go:89] "kube-ingress-dns-minikube" [5d722dca-00c8-447d-a43c-acdb7b5482e3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 09:32:11.045585  295952 system_pods.go:89] "kube-proxy-k5bfv" [ecc37a01-e2c2-4a4c-a272-ac37b4cd96f3] Running
	I1018 09:32:11.045613  295952 system_pods.go:89] "kube-scheduler-addons-006674" [ac4de840-6b5c-491a-9cbc-2a080dbc17bf] Running
	I1018 09:32:11.045632  295952 system_pods.go:89] "metrics-server-85b7d694d7-szvm5" [7f1c3285-5e41-444e-a773-3a86f80ec0c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 09:32:11.045655  295952 system_pods.go:89] "nvidia-device-plugin-daemonset-j658f" [a582f724-b46c-4377-b626-fcf59ae12980] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 09:32:11.045689  295952 system_pods.go:89] "registry-6b586f9694-flkkz" [cd38f302-4660-4066-897a-e2246722c55f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 09:32:11.045714  295952 system_pods.go:89] "registry-creds-764b6fb674-tjsdw" [23cd49e2-ec97-44a9-9bd9-370ba2b403c4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 09:32:11.045735  295952 system_pods.go:89] "registry-proxy-46rp2" [99a95a84-cd1c-42d2-b8a8-a0bb70a90f31] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 09:32:11.045769  295952 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9pgqt" [afec06f4-3213-44ef-a323-a658ff117c82] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 09:32:11.045792  295952 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rfbdb" [05444b46-9758-438d-92c4-5bf39c7165b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 09:32:11.045823  295952 system_pods.go:89] "storage-provisioner" [b7652d26-3d30-439f-a886-8dc4a69c9f1e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:32:11.045865  295952 retry.go:31] will retry after 207.462973ms: missing components: kube-dns
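The k8s-apps poll treats DNS as a required component: it keeps retrying until a CoreDNS pod reports Running, which happens a few hundred milliseconds later in this run. CoreDNS pods carry the legacy k8s-app=kube-dns label, so a hedged spot check would be:

    kubectl -n kube-system get pods -l k8s-app=kube-dns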
	I1018 09:32:11.115954  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:11.192279  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:11.258397  295952 system_pods.go:86] 19 kube-system pods found
	I1018 09:32:11.258481  295952 system_pods.go:89] "coredns-66bc5c9577-kj5jb" [97c49f44-8c6f-4c14-a90f-31dfda93a372] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:32:11.258505  295952 system_pods.go:89] "csi-hostpath-attacher-0" [d324748f-0916-4247-b8da-42b067ee5ff2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 09:32:11.258547  295952 system_pods.go:89] "csi-hostpath-resizer-0" [05ce0eee-4f78-44d5-b868-c8c9f13f276b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 09:32:11.258569  295952 system_pods.go:89] "csi-hostpathplugin-rswxb" [feddf8af-e1d0-4f04-a39a-eedf70a898ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 09:32:11.258589  295952 system_pods.go:89] "etcd-addons-006674" [c14c573d-1612-42d5-87f8-4bd642899fae] Running
	I1018 09:32:11.258610  295952 system_pods.go:89] "kindnet-h49vl" [6a1383f0-1850-4dff-991e-4fa71596bb58] Running
	I1018 09:32:11.258643  295952 system_pods.go:89] "kube-apiserver-addons-006674" [e5eff75b-55ea-4fbb-a83d-cc8550c66472] Running
	I1018 09:32:11.258663  295952 system_pods.go:89] "kube-controller-manager-addons-006674" [6e9cccba-4dab-456b-b750-0ac8893a4371] Running
	I1018 09:32:11.258686  295952 system_pods.go:89] "kube-ingress-dns-minikube" [5d722dca-00c8-447d-a43c-acdb7b5482e3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 09:32:11.258715  295952 system_pods.go:89] "kube-proxy-k5bfv" [ecc37a01-e2c2-4a4c-a272-ac37b4cd96f3] Running
	I1018 09:32:11.258732  295952 system_pods.go:89] "kube-scheduler-addons-006674" [ac4de840-6b5c-491a-9cbc-2a080dbc17bf] Running
	I1018 09:32:11.258761  295952 system_pods.go:89] "metrics-server-85b7d694d7-szvm5" [7f1c3285-5e41-444e-a773-3a86f80ec0c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 09:32:11.258791  295952 system_pods.go:89] "nvidia-device-plugin-daemonset-j658f" [a582f724-b46c-4377-b626-fcf59ae12980] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 09:32:11.258812  295952 system_pods.go:89] "registry-6b586f9694-flkkz" [cd38f302-4660-4066-897a-e2246722c55f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 09:32:11.258833  295952 system_pods.go:89] "registry-creds-764b6fb674-tjsdw" [23cd49e2-ec97-44a9-9bd9-370ba2b403c4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 09:32:11.258863  295952 system_pods.go:89] "registry-proxy-46rp2" [99a95a84-cd1c-42d2-b8a8-a0bb70a90f31] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 09:32:11.258891  295952 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9pgqt" [afec06f4-3213-44ef-a323-a658ff117c82] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 09:32:11.258911  295952 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rfbdb" [05444b46-9758-438d-92c4-5bf39c7165b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 09:32:11.258943  295952 system_pods.go:89] "storage-provisioner" [b7652d26-3d30-439f-a886-8dc4a69c9f1e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:32:11.258973  295952 retry.go:31] will retry after 251.1907ms: missing components: kube-dns
	I1018 09:32:11.399846  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:11.405050  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:11.516559  295952 system_pods.go:86] 19 kube-system pods found
	I1018 09:32:11.516641  295952 system_pods.go:89] "coredns-66bc5c9577-kj5jb" [97c49f44-8c6f-4c14-a90f-31dfda93a372] Running
	I1018 09:32:11.516677  295952 system_pods.go:89] "csi-hostpath-attacher-0" [d324748f-0916-4247-b8da-42b067ee5ff2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 09:32:11.516700  295952 system_pods.go:89] "csi-hostpath-resizer-0" [05ce0eee-4f78-44d5-b868-c8c9f13f276b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 09:32:11.516725  295952 system_pods.go:89] "csi-hostpathplugin-rswxb" [feddf8af-e1d0-4f04-a39a-eedf70a898ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 09:32:11.516770  295952 system_pods.go:89] "etcd-addons-006674" [c14c573d-1612-42d5-87f8-4bd642899fae] Running
	I1018 09:32:11.516790  295952 system_pods.go:89] "kindnet-h49vl" [6a1383f0-1850-4dff-991e-4fa71596bb58] Running
	I1018 09:32:11.516811  295952 system_pods.go:89] "kube-apiserver-addons-006674" [e5eff75b-55ea-4fbb-a83d-cc8550c66472] Running
	I1018 09:32:11.516840  295952 system_pods.go:89] "kube-controller-manager-addons-006674" [6e9cccba-4dab-456b-b750-0ac8893a4371] Running
	I1018 09:32:11.516868  295952 system_pods.go:89] "kube-ingress-dns-minikube" [5d722dca-00c8-447d-a43c-acdb7b5482e3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 09:32:11.516889  295952 system_pods.go:89] "kube-proxy-k5bfv" [ecc37a01-e2c2-4a4c-a272-ac37b4cd96f3] Running
	I1018 09:32:11.516919  295952 system_pods.go:89] "kube-scheduler-addons-006674" [ac4de840-6b5c-491a-9cbc-2a080dbc17bf] Running
	I1018 09:32:11.516946  295952 system_pods.go:89] "metrics-server-85b7d694d7-szvm5" [7f1c3285-5e41-444e-a773-3a86f80ec0c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 09:32:11.516970  295952 system_pods.go:89] "nvidia-device-plugin-daemonset-j658f" [a582f724-b46c-4377-b626-fcf59ae12980] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 09:32:11.517002  295952 system_pods.go:89] "registry-6b586f9694-flkkz" [cd38f302-4660-4066-897a-e2246722c55f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 09:32:11.517023  295952 system_pods.go:89] "registry-creds-764b6fb674-tjsdw" [23cd49e2-ec97-44a9-9bd9-370ba2b403c4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 09:32:11.517052  295952 system_pods.go:89] "registry-proxy-46rp2" [99a95a84-cd1c-42d2-b8a8-a0bb70a90f31] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 09:32:11.517081  295952 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9pgqt" [afec06f4-3213-44ef-a323-a658ff117c82] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 09:32:11.517103  295952 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rfbdb" [05444b46-9758-438d-92c4-5bf39c7165b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 09:32:11.517132  295952 system_pods.go:89] "storage-provisioner" [b7652d26-3d30-439f-a886-8dc4a69c9f1e] Running
	I1018 09:32:11.517163  295952 system_pods.go:126] duration metric: took 477.740758ms to wait for k8s-apps to be running ...
	I1018 09:32:11.517206  295952 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:32:11.517298  295952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:32:11.534818  295952 system_svc.go:56] duration metric: took 17.602576ms WaitForService to wait for kubelet
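systemctl is-active --quiet prints nothing and communicates only through its exit status (0 when the unit is active), which is why this step produces no output in the log. A minimal sketch of the same check:

    sudo systemctl is-active --quiet kubelet && echo kubelet is running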
	I1018 09:32:11.534916  295952 kubeadm.go:586] duration metric: took 44.262770068s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:32:11.534950  295952 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:32:11.538438  295952 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:32:11.538518  295952 node_conditions.go:123] node cpu capacity is 2
	I1018 09:32:11.538547  295952 node_conditions.go:105] duration metric: took 3.578196ms to run NodePressure ...
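The NodePressure verification reads capacity straight from the node's status: 203034800Ki of ephemeral storage (roughly 194 GiB) and 2 CPUs. A hedged way to see the same figures:

    kubectl get node addons-006674 -o jsonpath='{.status.capacity}{"\n"}'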
	I1018 09:32:11.538573  295952 start.go:241] waiting for startup goroutines ...
	I1018 09:32:11.614943  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:11.689045  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:11.905081  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:11.905334  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:12.114599  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:12.189271  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:12.406519  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:12.414706  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:12.615914  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:12.688676  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:12.895613  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:12.895784  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:13.114849  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:13.188195  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:13.404052  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:13.404499  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:13.615731  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:13.688637  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:13.897453  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:13.897942  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:14.115135  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:14.188866  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:14.395759  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:14.395902  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:14.616724  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:14.688446  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:14.898034  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:14.898530  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:15.115347  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:15.189631  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:15.396961  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:15.397355  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:15.614847  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:15.689884  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:15.896934  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:15.897355  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:16.114951  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:16.189164  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:16.396434  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:16.396874  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:16.615592  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:16.689712  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:16.896695  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:16.897492  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:17.115036  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:17.190213  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:17.396004  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:17.396636  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:17.616141  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:17.689598  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:17.896899  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:17.897283  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:18.114399  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:18.189761  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:18.396903  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:18.397283  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:18.617139  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:18.717319  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:18.896113  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:18.896248  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:19.114885  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:19.189082  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:19.396717  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:19.397866  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:19.616194  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:19.689674  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:19.896985  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:19.897417  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:20.115230  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:20.190027  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:20.396892  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:20.397240  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:20.614768  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:20.688578  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:20.900667  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:20.901106  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:21.114702  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:21.194592  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:21.396970  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:21.397675  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:21.616221  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:21.689708  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:21.896577  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:21.896915  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:22.114496  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:22.190318  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:22.395917  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:22.396095  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:22.567495  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:32:22.615611  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:22.689373  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:22.896060  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:22.896190  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:23.114459  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:23.189257  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:23.395916  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:23.396457  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:23.615227  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:23.689977  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:23.779619  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.212086726s)
	W1018 09:32:23.779656  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:32:23.779675  295952 retry.go:31] will retry after 22.639093049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
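
The stderr above pinpoints the failure: kubectl's client-side validation rejected /etc/kubernetes/addons/ig-crd.yaml because a document in that file sets neither apiVersion nor kind, while ig-deployment.yaml applied cleanly ("daemonset.apps/gadget configured"). For reference, a minimal sketch of the header every document in an applied manifest must carry; the group and name below are illustrative, not read from the failing file:

    # Hypothetical sketch: both fields are mandatory for kubectl validation.
    # An empty document (e.g. a stray leading "---") fails with exactly
    # "apiVersion not set, kind not set", as seen above.
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: traces.gadget.example.com   # illustrative CRD name

Passing --validate=false, as the message suggests, would only defer the failure to the API server, which requires both fields as well; fixing the manifest header is the actual remedy.
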
	I1018 09:32:23.895392  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:23.896375  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:24.114868  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:24.189669  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:24.394475  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:24.395953  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:24.615520  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:24.689110  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:24.896508  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:24.896601  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:25.114989  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:25.189328  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:25.397122  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:25.397380  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:25.614508  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:25.689310  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:25.894430  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:25.896651  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:26.115096  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:26.188993  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:26.396240  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:26.396373  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:26.614608  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:26.689389  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:26.896718  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:26.896993  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:27.116917  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:27.189679  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:27.396274  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:27.396805  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:27.615862  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:27.690178  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:27.895827  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:27.896351  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:28.115378  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:28.216354  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:28.396740  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:28.396844  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:28.615657  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:28.717666  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:28.899981  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:28.901369  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:29.115057  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:29.189111  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:29.396258  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:29.396432  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:29.614777  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:29.688814  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:29.897556  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:29.897688  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:30.116115  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:30.189518  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:30.395208  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:30.396385  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:30.615567  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:30.689456  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:30.895949  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:30.896093  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:31.114545  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:31.189793  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:31.395794  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:31.395939  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:31.614203  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:31.689214  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:31.896992  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:31.897439  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:32.114784  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:32.188844  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:32.397082  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:32.397512  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:32.615592  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:32.689875  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:32.895751  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:32.895959  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:33.115331  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:33.189363  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:33.395643  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:33.395794  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:33.615517  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:33.689567  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:33.895362  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:33.896577  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:34.115884  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:34.188798  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:34.396135  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:34.396464  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:34.615189  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:34.688947  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:34.900125  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:34.900780  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:35.117435  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:35.190238  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:35.394473  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:35.396297  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:35.615645  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:35.690088  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:35.895618  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:35.895776  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:36.115770  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:36.189027  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:36.396814  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:36.397581  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:36.615041  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:36.689021  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:36.896684  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:36.897009  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:37.114509  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:37.219904  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:37.395673  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:37.396035  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:37.616240  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:37.716178  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:37.896753  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:37.896970  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:38.115693  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:38.189302  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:38.395554  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:38.396641  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:38.615546  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:38.689544  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:38.896304  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:38.896799  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:39.115699  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:39.189766  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:39.396842  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:39.397444  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:39.615572  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:39.689167  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:39.896051  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:39.896440  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:40.117883  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:40.218829  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:40.396039  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:40.396566  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:40.616054  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:40.688976  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:40.896260  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:40.896411  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:41.117149  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:41.191684  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:41.396282  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:41.396949  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:41.614800  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:41.689504  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:41.897406  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:41.897736  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:42.120142  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:42.189903  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:42.395895  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:42.396076  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:42.614948  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:42.688905  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:42.897276  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:42.897446  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:43.115485  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:43.189305  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:43.408451  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:43.416016  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:43.615422  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:43.689731  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:43.896842  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:43.897390  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:44.115417  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:44.189992  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:44.396143  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:44.396181  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:44.615046  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:44.688992  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:44.895944  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:44.896152  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:45.138659  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:45.190768  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:45.395990  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:45.396135  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:45.614968  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:45.689479  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:45.896959  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:45.897436  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:46.115635  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:46.216198  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:46.396266  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:46.396427  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:46.419713  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:32:46.616569  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:46.691071  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:46.930899  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:46.931594  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:47.116393  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:47.194940  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:47.396002  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:47.396202  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:47.622265  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:47.689595  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:47.903927  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:47.904421  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:48.114906  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:48.189862  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:48.342754  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.922999268s)
	W1018 09:32:48.342797  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:32:48.342818  295952 retry.go:31] will retry after 34.0679614s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
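
The retry.go lines above show the failed apply being rescheduled with growing, jittered delays (22.6s after the first failure, 34.1s after the second). A minimal Go sketch of that backoff pattern, illustrative rather than minikube's actual retry.go:

    // retryApply runs fn up to attempts times, sleeping a randomized,
    // roughly doubling delay between failures, matching the
    // "will retry after 22.6s ... 34.0s" progression in the log.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryApply(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		if i == attempts-1 {
    			break // no point sleeping after the final attempt
    		}
    		// Delay doubles each round, with jitter so parallel
    		// appliers do not retry in lockstep.
    		d := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	attempt := 0
    	_ = retryApply(3, time.Second, func() error {
    		attempt++
    		if attempt < 3 {
    			return fmt.Errorf("apply failed (attempt %d)", attempt)
    		}
    		return nil
    	})
    }
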
	I1018 09:32:48.395908  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:48.396052  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:48.614748  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:48.692227  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:48.896593  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:48.897554  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:49.115416  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:49.190201  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:49.396181  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:49.396587  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:49.615211  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:49.689646  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:49.899408  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:49.899553  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:50.115730  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:50.188851  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:50.395546  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:50.396673  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:50.615252  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:50.692352  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:50.897245  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:50.897845  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:51.115576  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:51.190069  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:51.395817  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:51.396303  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:51.614675  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:51.690139  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:51.897607  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:51.897702  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:52.114963  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:52.189924  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:52.396959  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:52.397310  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:52.615170  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:52.689423  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:52.897217  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:52.898059  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:53.117102  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:53.198343  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:53.398619  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:53.399991  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:53.618020  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:53.710913  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:53.897843  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:53.898310  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:54.121088  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:54.189060  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:54.396084  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:54.396190  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:54.614945  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:54.690338  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:54.899503  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:54.899668  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:55.115342  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:55.190723  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:55.395884  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:55.396051  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:55.614758  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:55.695372  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:55.895083  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:55.895670  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:56.122586  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:56.219702  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:56.394758  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:56.396199  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:56.614602  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:56.689083  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:56.896670  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:56.896875  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:57.115792  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:57.192174  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:57.396968  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:57.397684  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:57.617013  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:57.689215  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:57.896931  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:57.897302  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:58.115642  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:58.189861  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:58.396883  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:58.396956  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:58.616374  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:58.690333  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:58.895250  295952 kapi.go:107] duration metric: took 1m25.003865289s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 09:32:58.895925  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:59.115379  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:59.189444  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:59.396019  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:59.622143  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:59.688772  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:59.897306  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:00.115925  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:00.190463  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:00.397368  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:00.615054  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:00.689421  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:00.896116  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:01.116138  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:01.190003  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:01.395963  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:01.615658  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:01.689429  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:01.896495  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:02.115197  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:02.190062  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:02.396147  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:02.616684  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:02.692056  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:02.895922  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:03.122247  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:03.190797  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:03.395930  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:03.614191  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:03.690680  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:03.911673  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:04.119811  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:04.189408  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:04.396682  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:04.616088  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:04.690198  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:04.896303  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:05.117125  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:05.216474  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:05.396080  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:05.614712  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:05.691392  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:05.895705  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:06.115527  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:06.190008  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:06.396336  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:06.614466  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:06.689211  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:06.895614  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:07.115582  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:07.189976  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:07.396506  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:07.614950  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:07.690057  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:07.896148  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:08.118500  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:08.189089  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:08.395393  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:08.614914  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:08.689085  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:08.895940  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:09.115564  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:09.190113  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:09.410455  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:09.620063  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:09.689073  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:09.897018  295952 kapi.go:107] duration metric: took 1m36.004771364s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 09:33:10.116229  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:10.189357  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:10.656775  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:10.693435  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:11.115348  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:11.189764  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:11.614763  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:11.715044  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:12.114722  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:12.192780  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:12.619571  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:12.689917  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:13.116367  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:13.189211  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:13.615659  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:13.690048  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:14.115479  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:14.189516  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:14.614872  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:14.693265  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:15.114701  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:15.189118  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:15.615131  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:15.688629  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:16.115333  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:16.190189  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:16.614301  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:16.689113  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:17.114567  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:17.189970  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:17.615202  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:17.689255  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:18.117584  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:18.189365  295952 kapi.go:107] duration metric: took 1m40.503599602s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 09:33:18.192321  295952 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-006674 cluster.
	I1018 09:33:18.195223  295952 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 09:33:18.198012  295952 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 09:33:18.615155  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:19.115073  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:19.614870  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:20.114898  295952 kapi.go:107] duration metric: took 1m45.503768448s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 09:33:22.411775  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 09:33:23.306200  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 09:33:23.306297  295952 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1018 09:33:23.311109  295952 out.go:179] * Enabled addons: registry-creds, ingress-dns, amd-gpu-device-plugin, storage-provisioner-rancher, cloud-spanner, storage-provisioner, nvidia-device-plugin, metrics-server, default-storageclass, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1018 09:33:23.314110  295952 addons.go:514] duration metric: took 1m56.041488915s for enable addons: enabled=[registry-creds ingress-dns amd-gpu-device-plugin storage-provisioner-rancher cloud-spanner storage-provisioner nvidia-device-plugin metrics-server default-storageclass yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1018 09:33:23.314186  295952 start.go:246] waiting for cluster config update ...
	I1018 09:33:23.314209  295952 start.go:255] writing updated cluster config ...
	I1018 09:33:23.314547  295952 ssh_runner.go:195] Run: rm -f paused
	I1018 09:33:23.318795  295952 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:33:23.322965  295952 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kj5jb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:23.328121  295952 pod_ready.go:94] pod "coredns-66bc5c9577-kj5jb" is "Ready"
	I1018 09:33:23.328151  295952 pod_ready.go:86] duration metric: took 5.153969ms for pod "coredns-66bc5c9577-kj5jb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:23.330698  295952 pod_ready.go:83] waiting for pod "etcd-addons-006674" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:23.336124  295952 pod_ready.go:94] pod "etcd-addons-006674" is "Ready"
	I1018 09:33:23.336152  295952 pod_ready.go:86] duration metric: took 5.425994ms for pod "etcd-addons-006674" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:23.338468  295952 pod_ready.go:83] waiting for pod "kube-apiserver-addons-006674" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:23.343397  295952 pod_ready.go:94] pod "kube-apiserver-addons-006674" is "Ready"
	I1018 09:33:23.343431  295952 pod_ready.go:86] duration metric: took 4.937166ms for pod "kube-apiserver-addons-006674" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:23.346323  295952 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-006674" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:23.722541  295952 pod_ready.go:94] pod "kube-controller-manager-addons-006674" is "Ready"
	I1018 09:33:23.722574  295952 pod_ready.go:86] duration metric: took 376.224452ms for pod "kube-controller-manager-addons-006674" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:23.923288  295952 pod_ready.go:83] waiting for pod "kube-proxy-k5bfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:24.322983  295952 pod_ready.go:94] pod "kube-proxy-k5bfv" is "Ready"
	I1018 09:33:24.323020  295952 pod_ready.go:86] duration metric: took 399.703074ms for pod "kube-proxy-k5bfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:24.522946  295952 pod_ready.go:83] waiting for pod "kube-scheduler-addons-006674" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:24.923376  295952 pod_ready.go:94] pod "kube-scheduler-addons-006674" is "Ready"
	I1018 09:33:24.923404  295952 pod_ready.go:86] duration metric: took 400.428455ms for pod "kube-scheduler-addons-006674" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:24.923416  295952 pod_ready.go:40] duration metric: took 1.604591127s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:33:24.994791  295952 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:33:24.998663  295952 out.go:179] * Done! kubectl is now configured to use "addons-006674" cluster and "default" namespace by default
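
Editor's note: the long runs of kapi.go:96 lines earlier in this log, and the pod_ready block just above, are the same pattern: poll pods matching a label (or a fixed name) until each reports the Ready condition. A minimal client-go sketch of that loop, assuming an already-built clientset; the helper names are illustrative, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForLabeledPods polls until every pod matching selector reports
// Ready, the pattern recorded by the kapi.go:96 lines above.
func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allReady := len(pods.Items) > 0
		for i := range pods.Items {
			if !isPodReady(&pods.Items[i]) {
				allReady = false
				break
			}
		}
		if allReady {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the timestamps above
	}
	return fmt.Errorf("timed out waiting for pods %q in namespace %q", selector, ns)
}

func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}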
	
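Editor's note: the inspektor-gadget failure above is kubectl's client-side validation rejecting ig-crd.yaml because a document in it lacks the two mandatory type fields. A hedged reproduction of that check for a single-document manifest (kubectl splits multi-document files first; this sketch does not):

package main

import (
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

// checkTypeMeta mirrors the validation error above: every manifest
// document must declare apiVersion and kind.
func checkTypeMeta(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var tm metav1.TypeMeta
	if err := yaml.Unmarshal(data, &tm); err != nil {
		return err
	}
	if tm.APIVersion == "" || tm.Kind == "" {
		return fmt.Errorf("error validating %q: apiVersion not set, kind not set", path)
	}
	return nil
}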
	
	==> CRI-O <==
	Oct 18 09:36:36 addons-006674 crio[829]: time="2025-10-18T09:36:36.708729864Z" level=info msg="Removed container 0cffe982d3755c18b1c3f7c4c2ff475d8a6dc228e13088de3ee4e30e474d465f: kube-system/registry-creds-764b6fb674-tjsdw/registry-creds" id=28a24996-ef0b-42b1-9c20-b286063e8d76 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.158387946Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-w2lns/POD" id=3627a649-1375-4d64-8b26-dc746b1ed56e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.158488897Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.168485237Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-w2lns Namespace:default ID:71ecc1210cee8f5b90c88a0b3e26ee01fcf014c75a873f59c27a4dc5dc7cc953 UID:5813a277-3621-4c4e-b47c-e0a5e7c32705 NetNS:/var/run/netns/ea4ece2e-7d32-4d5a-ae80-1474ddf01f99 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001d33208}] Aliases:map[]}"
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.16865842Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-w2lns to CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.191256913Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-w2lns Namespace:default ID:71ecc1210cee8f5b90c88a0b3e26ee01fcf014c75a873f59c27a4dc5dc7cc953 UID:5813a277-3621-4c4e-b47c-e0a5e7c32705 NetNS:/var/run/netns/ea4ece2e-7d32-4d5a-ae80-1474ddf01f99 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001d33208}] Aliases:map[]}"
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.191699931Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-w2lns for CNI network kindnet (type=ptp)"
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.194651511Z" level=info msg="Ran pod sandbox 71ecc1210cee8f5b90c88a0b3e26ee01fcf014c75a873f59c27a4dc5dc7cc953 with infra container: default/hello-world-app-5d498dc89-w2lns/POD" id=3627a649-1375-4d64-8b26-dc746b1ed56e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.203815225Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f1c13179-9392-40aa-adb5-a5dd7a8eb61c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.204041136Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=f1c13179-9392-40aa-adb5-a5dd7a8eb61c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.204093158Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=f1c13179-9392-40aa-adb5-a5dd7a8eb61c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.206699799Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=ec657a7a-f24b-4fdb-93bd-64a5a398ef3a name=/runtime.v1.ImageService/PullImage
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.208506258Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.80910247Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=ec657a7a-f24b-4fdb-93bd-64a5a398ef3a name=/runtime.v1.ImageService/PullImage
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.809932323Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5e9fb4ec-8bec-4b55-9dde-46e4f728afc5 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.8152427Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=7b541840-da06-4b8d-996a-b81b9debbf87 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.824823076Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-w2lns/hello-world-app" id=f6f86b07-f973-46ff-b641-503ff0208e3c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.8259306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.836664695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.842880005Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/42cee674c46081c81c825b2102c0c7fe35ee1cca1fe670044048ac63ba0c97d3/merged/etc/passwd: no such file or directory"
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.843075506Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/42cee674c46081c81c825b2102c0c7fe35ee1cca1fe670044048ac63ba0c97d3/merged/etc/group: no such file or directory"
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.843443707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.864760509Z" level=info msg="Created container b757e56a28c1f03e354b0c279097f6096e9b9a3a6757f52bd154830f224a36b2: default/hello-world-app-5d498dc89-w2lns/hello-world-app" id=f6f86b07-f973-46ff-b641-503ff0208e3c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.867223285Z" level=info msg="Starting container: b757e56a28c1f03e354b0c279097f6096e9b9a3a6757f52bd154830f224a36b2" id=fa40fcdc-693b-4d49-8e09-335ea85941c0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:36:49 addons-006674 crio[829]: time="2025-10-18T09:36:49.875081742Z" level=info msg="Started container" PID=7317 containerID=b757e56a28c1f03e354b0c279097f6096e9b9a3a6757f52bd154830f224a36b2 description=default/hello-world-app-5d498dc89-w2lns/hello-world-app id=fa40fcdc-693b-4d49-8e09-335ea85941c0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=71ecc1210cee8f5b90c88a0b3e26ee01fcf014c75a873f59c27a4dc5dc7cc953
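
Editor's note: the CRI-O entries above record one complete CRI round trip for hello-world-app: RunPodSandbox, an ImageStatus miss, PullImage, CreateContainer, StartContainer. A sketch of the image half of that exchange against the CRI gRPC API; the socket path is CRI-O's conventional default, an assumption rather than something this log states:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket; an assumption for this sketch.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	spec := &runtimeapi.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"}

	// ImageStatus first; the log above shows this returning "not found".
	st, err := img.ImageStatus(context.Background(),
		&runtimeapi.ImageStatusRequest{Image: spec})
	if err != nil {
		log.Fatal(err)
	}
	if st.Image == nil {
		// Cache miss, so pull, exactly as crio did at 09:36:49.
		if _, err := img.PullImage(context.Background(),
			&runtimeapi.PullImageRequest{Image: spec}); err != nil {
			log.Fatal(err)
		}
	}
}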
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	b757e56a28c1f       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   71ecc1210cee8       hello-world-app-5d498dc89-w2lns             default
	3fe710fabe2e7       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             14 seconds ago           Exited              registry-creds                           2                   f2435e0e53443       registry-creds-764b6fb674-tjsdw             kube-system
	0a3141147543b       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   a83d833ec95eb       nginx                                       default
	f68068950f0ae       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   1ab132f027cab       busybox                                     default
	92482b56ebf75       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   057bb08f47fad       csi-hostpathplugin-rswxb                    kube-system
	bb97628e9a17a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   d93a4f2b7e8f1       gcp-auth-78565c9fb4-m69wg                   gcp-auth
	2668cbad9c190       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   057bb08f47fad       csi-hostpathplugin-rswxb                    kube-system
	3868b4ac74b7b       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   057bb08f47fad       csi-hostpathplugin-rswxb                    kube-system
	03c9c979a54ef       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   057bb08f47fad       csi-hostpathplugin-rswxb                    kube-system
	9d6a5e7844b19       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   057bb08f47fad       csi-hostpathplugin-rswxb                    kube-system
	a1e4b9d5843fa       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   43228e15490f7       ingress-nginx-controller-675c5ddd98-fjw9h   ingress-nginx
	c33cff2bf27b1       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   6974073f293e4       gadget-77zfw                                gadget
	025d3e64c63bd       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   1b7c0aeaf614c       registry-proxy-46rp2                        kube-system
	e66aaf86ae284       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   ea082e8f2f885       csi-hostpath-resizer-0                      kube-system
	fc5f92cc54e39       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   6f5c11abd95bb       snapshot-controller-7d9fbc56b8-rfbdb        kube-system
	4ed69c6d109cc       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   9d84459635af9       metrics-server-85b7d694d7-szvm5             kube-system
	442597e183407       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   057bb08f47fad       csi-hostpathplugin-rswxb                    kube-system
	54b6974a01255       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   8c524d7e32519       registry-6b586f9694-flkkz                   kube-system
	0a966a2cc2562       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   5e1a17f615416       local-path-provisioner-648f6765c9-848bh     local-path-storage
	d7a1cd7ba1844       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   49db078e1621a       kube-ingress-dns-minikube                   kube-system
	59dad7f7df257       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             4 minutes ago            Exited              patch                                    2                   e44195682fefd       ingress-nginx-admission-patch-rlk7w         ingress-nginx
	7c2c142c2c3e7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   4 minutes ago            Exited              create                                   0                   0b5298e8c04ff       ingress-nginx-admission-create-zp84p        ingress-nginx
	1aec9843e6b35       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     4 minutes ago            Running             nvidia-device-plugin-ctr                 0                   3636cd0863c74       nvidia-device-plugin-daemonset-j658f        kube-system
	fdaf99bae646f       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   6720a28ae23c3       csi-hostpath-attacher-0                     kube-system
	26c85f4442390       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   50196ba9e7bb6       yakd-dashboard-5ff678cb9-zncv4              yakd-dashboard
	faa7882723437       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   54e1ed93b01e7       snapshot-controller-7d9fbc56b8-9pgqt        kube-system
	94c0691b24391       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   3ecc2127e7c4e       cloud-spanner-emulator-86bd5cbb97-ld2w5     default
	8ba1ab4998b33       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   0d8f1f49f68b9       storage-provisioner                         kube-system
	7a4cd51451e05       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   bf1e3789f3182       coredns-66bc5c9577-kj5jb                    kube-system
	ee39b4a9868c7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   d08071cc6edec       kube-proxy-k5bfv                            kube-system
	6864cc8c9035c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   3c55ceb1a3be3       kindnet-h49vl                               kube-system
	7c7055bef3a7a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   4b50b5a00e3f5       kube-apiserver-addons-006674                kube-system
	265553ed8d31e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   6cbf64983ec75       kube-scheduler-addons-006674                kube-system
	218e3162f40e7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   a7b0f6f557485       etcd-addons-006674                          kube-system
	ca64f5775c712       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   3c2c7b7a2927b       kube-controller-manager-addons-006674       kube-system
	
	
	==> coredns [7a4cd51451e0593916b537cc8613320fe84f5ad1b48e9c20ea79b02ebff89f08] <==
	[INFO] 10.244.0.9:48205 - 59177 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003122478s
	[INFO] 10.244.0.9:48205 - 41901 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000097349s
	[INFO] 10.244.0.9:48205 - 31786 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000092622s
	[INFO] 10.244.0.9:39720 - 56159 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131451s
	[INFO] 10.244.0.9:39720 - 55913 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00015838s
	[INFO] 10.244.0.9:54791 - 31067 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081365s
	[INFO] 10.244.0.9:54791 - 31264 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000127734s
	[INFO] 10.244.0.9:53474 - 1053 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000123467s
	[INFO] 10.244.0.9:53474 - 872 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000163337s
	[INFO] 10.244.0.9:57779 - 37705 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001362865s
	[INFO] 10.244.0.9:57779 - 37942 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001395621s
	[INFO] 10.244.0.9:51830 - 52679 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121481s
	[INFO] 10.244.0.9:51830 - 52852 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000614928s
	[INFO] 10.244.0.21:48710 - 33868 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000161638s
	[INFO] 10.244.0.21:36211 - 54023 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000157921s
	[INFO] 10.244.0.21:34339 - 25166 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117788s
	[INFO] 10.244.0.21:51831 - 11352 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127922s
	[INFO] 10.244.0.21:34666 - 47358 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000092311s
	[INFO] 10.244.0.21:40122 - 10564 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082054s
	[INFO] 10.244.0.21:40639 - 26495 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002174058s
	[INFO] 10.244.0.21:41310 - 11597 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002413386s
	[INFO] 10.244.0.21:44454 - 13966 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00243069s
	[INFO] 10.244.0.21:39046 - 33723 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002488235s
	[INFO] 10.244.0.23:40860 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000260726s
	[INFO] 10.244.0.23:47996 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145326s
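
Editor's note: the NXDOMAIN/NOERROR pairs above are ordinary resolver search-path expansion: with the typical pod resolv.conf (options ndots:5), a name with fewer than five dots is tried against each search suffix before the bare name, and only the bare cluster name answers NOERROR. A small sketch generating the candidate set visible in the log; the search list is reconstructed from the queries themselves:

package main

import (
	"fmt"
	"strings"
)

func main() {
	name := "registry.kube-system.svc.cluster.local"
	// Search path as reconstructed from the queries in the log above.
	search := []string{
		"kube-system.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"us-east-2.compute.internal",
	}
	const ndots = 5
	// With fewer than ndots dots, search suffixes are tried first.
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			fmt.Println(name + "." + s) // each of these NXDOMAINed above
		}
	}
	fmt.Println(name) // the bare name, which returned NOERROR
}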
	
	
	==> describe nodes <==
	Name:               addons-006674
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-006674
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=addons-006674
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_31_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-006674
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-006674"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:31:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-006674
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:36:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:36:30 +0000   Sat, 18 Oct 2025 09:31:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:36:30 +0000   Sat, 18 Oct 2025 09:31:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:36:30 +0000   Sat, 18 Oct 2025 09:31:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:36:30 +0000   Sat, 18 Oct 2025 09:32:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-006674
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                5f95656e-8cd5-4065-8611-2240f79f89f6
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  default                     cloud-spanner-emulator-86bd5cbb97-ld2w5      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  default                     hello-world-app-5d498dc89-w2lns              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-77zfw                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  gcp-auth                    gcp-auth-78565c9fb4-m69wg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-fjw9h    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m17s
	  kube-system                 coredns-66bc5c9577-kj5jb                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m22s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 csi-hostpathplugin-rswxb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 etcd-addons-006674                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m27s
	  kube-system                 kindnet-h49vl                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m22s
	  kube-system                 kube-apiserver-addons-006674                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-controller-manager-addons-006674        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-proxy-k5bfv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-scheduler-addons-006674                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 metrics-server-85b7d694d7-szvm5              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m18s
	  kube-system                 nvidia-device-plugin-daemonset-j658f         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 registry-6b586f9694-flkkz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 registry-creds-764b6fb674-tjsdw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 registry-proxy-46rp2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 snapshot-controller-7d9fbc56b8-9pgqt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 snapshot-controller-7d9fbc56b8-rfbdb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  local-path-storage          local-path-provisioner-648f6765c9-848bh      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-zncv4               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m20s  kube-proxy       
	  Normal   Starting                 5m28s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m28s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m27s  kubelet          Node addons-006674 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m27s  kubelet          Node addons-006674 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m27s  kubelet          Node addons-006674 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m23s  node-controller  Node addons-006674 event: Registered Node addons-006674 in Controller
	  Normal   NodeReady                4m41s  kubelet          Node addons-006674 status is now: NodeReady
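
Editor's note: as a cross-check, the Allocated resources totals follow directly from the pod table above. CPU requests are 6 × 100m (ingress-nginx controller, coredns, etcd, kindnet, kube-scheduler, metrics-server) + 250m (kube-apiserver) + 200m (kube-controller-manager) = 1050m, and 1050m of the 2000m allocatable is 52.5%, reported as 52%. Memory requests likewise sum to 90 + 70 + 100 + 50 + 200 + 128 = 638Mi, about 8% of the 8022308Ki allocatable.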
	
	
	==> dmesg <==
	[Oct18 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015604] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504512] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034321] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.754127] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.006986] kauditd_printk_skb: 36 callbacks suppressed
	[Oct18 08:37] hrtimer: interrupt took 52245394 ns
	[Oct18 08:40] FS-Cache: Duplicate cookie detected
	[  +0.000820] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001041] FS-Cache: O-cookie d=0000000012c02099{9P.session} n=0000000039d56c98
	[  +0.001191] FS-Cache: O-key=[10] '34323935323339393835'
	[  +0.000847] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001040] FS-Cache: N-cookie d=0000000012c02099{9P.session} n=00000000aa671ad4
	[  +0.001145] FS-Cache: N-key=[10] '34323935323339393835'
	[Oct18 09:29] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 09:31] overlayfs: idmapped layers are currently not supported
	[  +0.081210] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [218e3162f40e71fa576a92a613a2a422c61a439446739273ed3ec3b5b069db24] <==
	{"level":"warn","ts":"2025-10-18T09:31:18.819004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.835226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.887677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.895827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.904764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.921368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.942678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.955514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.977084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.997049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.008106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.029415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.042537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.066881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.083887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.113876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.141356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.150430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.254324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:34.753297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:34.771974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:57.089047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:57.103080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:57.151868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:57.168947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38122","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [bb97628e9a17acaa4875c83f17618a29d2209fb42d631e76e614a0da27d47629] <==
	2025/10/18 09:33:17 GCP Auth Webhook started!
	2025/10/18 09:33:25 Ready to marshal response ...
	2025/10/18 09:33:25 Ready to write response ...
	2025/10/18 09:33:26 Ready to marshal response ...
	2025/10/18 09:33:26 Ready to write response ...
	2025/10/18 09:33:26 Ready to marshal response ...
	2025/10/18 09:33:26 Ready to write response ...
	2025/10/18 09:33:46 Ready to marshal response ...
	2025/10/18 09:33:46 Ready to write response ...
	2025/10/18 09:33:46 Ready to marshal response ...
	2025/10/18 09:33:46 Ready to write response ...
	2025/10/18 09:33:59 Ready to marshal response ...
	2025/10/18 09:33:59 Ready to write response ...
	2025/10/18 09:33:59 Ready to marshal response ...
	2025/10/18 09:33:59 Ready to write response ...
	2025/10/18 09:34:07 Ready to marshal response ...
	2025/10/18 09:34:07 Ready to write response ...
	2025/10/18 09:34:18 Ready to marshal response ...
	2025/10/18 09:34:18 Ready to write response ...
	2025/10/18 09:34:27 Ready to marshal response ...
	2025/10/18 09:34:27 Ready to write response ...
	2025/10/18 09:36:48 Ready to marshal response ...
	2025/10/18 09:36:48 Ready to write response ...
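
Editor's note: the webhook above logs a marshal/write pair for each pod admission it mutates. Per the minikube note earlier in this log, a pod can opt out via the gcp-auth-skip-secret label; a hedged sketch in Go types follows (only the label key comes from the log; "true" as the value is an assumption for illustration):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// optOutPod builds a pod the gcp-auth webhook should leave alone.
func optOutPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "docker.io/kicbase/echo-server:1.0",
			}},
		},
	}
}

func main() { fmt.Println(optOutPod().Labels) }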
	
	
	==> kernel <==
	 09:36:51 up  1:19,  0 user,  load average: 1.06, 1.76, 2.49
	Linux addons-006674 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6864cc8c9035cc4900e88044a87d6126b379de12ae10cf15ebcbac3d449777c6] <==
	I1018 09:34:49.019164       1 main.go:301] handling current node
	I1018 09:34:59.019871       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:34:59.019927       1 main.go:301] handling current node
	I1018 09:35:09.024499       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:35:09.024533       1 main.go:301] handling current node
	I1018 09:35:19.026259       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:35:19.026357       1 main.go:301] handling current node
	I1018 09:35:29.018493       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:35:29.018601       1 main.go:301] handling current node
	I1018 09:35:39.022506       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:35:39.022538       1 main.go:301] handling current node
	I1018 09:35:49.021513       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:35:49.021547       1 main.go:301] handling current node
	I1018 09:35:59.018699       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:35:59.018814       1 main.go:301] handling current node
	I1018 09:36:09.024457       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:36:09.024491       1 main.go:301] handling current node
	I1018 09:36:19.027667       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:36:19.027696       1 main.go:301] handling current node
	I1018 09:36:29.021420       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:36:29.021477       1 main.go:301] handling current node
	I1018 09:36:39.020461       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:36:39.020497       1 main.go:301] handling current node
	I1018 09:36:49.019771       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:36:49.019892       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7c7055bef3a7ada650e4d5f05a879413867ddb0163357c223b1f47a1b921b99f] <==
	E1018 09:32:33.188873       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 09:32:33.188884       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1018 09:32:33.190588       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 09:32:33.190621       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1018 09:32:33.190636       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1018 09:32:54.741127       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.130.149:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.130.149:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.130.149:443: connect: connection refused" logger="UnhandledError"
	W1018 09:32:54.741262       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 09:32:54.741325       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1018 09:32:54.742064       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.130.149:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.130.149:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.130.149:443: connect: connection refused" logger="UnhandledError"
	E1018 09:32:54.747517       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.130.149:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.130.149:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.130.149:443: connect: connection refused" logger="UnhandledError"
	E1018 09:32:54.769311       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.130.149:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.130.149:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.130.149:443: connect: connection refused" logger="UnhandledError"
	I1018 09:32:54.921213       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 09:33:35.350876       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37626: use of closed network connection
	E1018 09:33:35.576413       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37656: use of closed network connection
	E1018 09:33:35.716989       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37684: use of closed network connection
	I1018 09:33:59.826286       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1018 09:34:27.508823       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 09:34:27.802262       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.31.187"}
	I1018 09:36:49.024141       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.44.156"}
	
	
	==> kube-controller-manager [ca64f5775c712d47b50002e93a4481eb4abcb5b068389fb2bfc06c1f7f58345c] <==
	I1018 09:31:27.068099       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:31:27.076250       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:31:27.076524       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:31:27.079802       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-006674" podCIDRs=["10.244.0.0/24"]
	I1018 09:31:27.080271       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:31:27.088781       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:31:27.090092       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:31:27.096379       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:31:27.097413       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:31:27.108319       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:31:27.110558       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:31:27.110573       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:31:27.110588       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:31:27.111691       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	E1018 09:31:32.312820       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 09:31:57.081578       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 09:31:57.081828       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 09:31:57.081888       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 09:31:57.114886       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 09:31:57.143404       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 09:31:57.183654       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:31:57.244446       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:32:12.032720       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1018 09:32:27.191364       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 09:32:27.252498       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [ee39b4a9868c7aec2142eb39fa00467bfd823efe9960710ad5f7a6d956fff7cc] <==
	I1018 09:31:30.374194       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:31:30.495609       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:31:30.596630       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:31:30.596670       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 09:31:30.596734       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:31:30.667799       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:31:30.667848       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:31:30.680465       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:31:30.680745       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:31:30.680765       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:31:30.684052       1 config.go:200] "Starting service config controller"
	I1018 09:31:30.684064       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:31:30.684080       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:31:30.684085       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:31:30.684096       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:31:30.684099       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:31:30.684749       1 config.go:309] "Starting node config controller"
	I1018 09:31:30.684756       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:31:30.684761       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:31:30.787708       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:31:30.787747       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:31:30.787788       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [265553ed8d31e015701ccfb66997006c5a0cb46907fc11e25d67d2b5235e54e6] <==
	E1018 09:31:20.119759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:31:20.120049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:31:20.120216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:31:20.120277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:31:20.121794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:31:20.122236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:31:20.122433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:31:20.122538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:31:20.122703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:31:20.122826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:31:20.123113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:31:20.123300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:31:21.015339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:31:21.021902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:31:21.060217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:31:21.087464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:31:21.202723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:31:21.236926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:31:21.279066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:31:21.306705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:31:21.319493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:31:21.342332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 09:31:21.346901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:31:21.387833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1018 09:31:23.905086       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:36:19 addons-006674 kubelet[1287]: W1018 09:36:19.673767    1287 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2a58daa84df606f1a3eacd3ed59a710b3ede45b497c9ff78e57c7c851671ea0c/crio-f2435e0e53443690af0fc0c7331e4629628f6c1296aa8841755c8cc6f5962932 WatchSource:0}: Error finding container f2435e0e53443690af0fc0c7331e4629628f6c1296aa8841755c8cc6f5962932: Status 404 returned error can't find the container with id f2435e0e53443690af0fc0c7331e4629628f6c1296aa8841755c8cc6f5962932
	Oct 18 09:36:21 addons-006674 kubelet[1287]: I1018 09:36:21.620370    1287 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-tjsdw" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 09:36:21 addons-006674 kubelet[1287]: I1018 09:36:21.620436    1287 scope.go:117] "RemoveContainer" containerID="743d26fb9601f36d3ddd3e440265dca23f23800c5a7bd78907359a919c74a79d"
	Oct 18 09:36:21 addons-006674 kubelet[1287]: I1018 09:36:21.645977    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=112.099277284 podStartE2EDuration="1m54.645951s" podCreationTimestamp="2025-10-18 09:34:27 +0000 UTC" firstStartedPulling="2025-10-18 09:34:28.090387648 +0000 UTC m=+185.185315267" lastFinishedPulling="2025-10-18 09:34:30.637061364 +0000 UTC m=+187.731988983" observedRunningTime="2025-10-18 09:34:31.252234647 +0000 UTC m=+188.347162266" watchObservedRunningTime="2025-10-18 09:36:21.645951 +0000 UTC m=+298.740878619"
	Oct 18 09:36:22 addons-006674 kubelet[1287]: I1018 09:36:22.634675    1287 scope.go:117] "RemoveContainer" containerID="743d26fb9601f36d3ddd3e440265dca23f23800c5a7bd78907359a919c74a79d"
	Oct 18 09:36:22 addons-006674 kubelet[1287]: I1018 09:36:22.635530    1287 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-tjsdw" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 09:36:22 addons-006674 kubelet[1287]: I1018 09:36:22.635665    1287 scope.go:117] "RemoveContainer" containerID="0cffe982d3755c18b1c3f7c4c2ff475d8a6dc228e13088de3ee4e30e474d465f"
	Oct 18 09:36:22 addons-006674 kubelet[1287]: E1018 09:36:22.635921    1287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-tjsdw_kube-system(23cd49e2-ec97-44a9-9bd9-370ba2b403c4)\"" pod="kube-system/registry-creds-764b6fb674-tjsdw" podUID="23cd49e2-ec97-44a9-9bd9-370ba2b403c4"
	Oct 18 09:36:23 addons-006674 kubelet[1287]: I1018 09:36:23.640208    1287 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-tjsdw" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 09:36:23 addons-006674 kubelet[1287]: I1018 09:36:23.640266    1287 scope.go:117] "RemoveContainer" containerID="0cffe982d3755c18b1c3f7c4c2ff475d8a6dc228e13088de3ee4e30e474d465f"
	Oct 18 09:36:23 addons-006674 kubelet[1287]: E1018 09:36:23.640415    1287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-tjsdw_kube-system(23cd49e2-ec97-44a9-9bd9-370ba2b403c4)\"" pod="kube-system/registry-creds-764b6fb674-tjsdw" podUID="23cd49e2-ec97-44a9-9bd9-370ba2b403c4"
	Oct 18 09:36:36 addons-006674 kubelet[1287]: I1018 09:36:36.025574    1287 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-tjsdw" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 09:36:36 addons-006674 kubelet[1287]: I1018 09:36:36.025646    1287 scope.go:117] "RemoveContainer" containerID="0cffe982d3755c18b1c3f7c4c2ff475d8a6dc228e13088de3ee4e30e474d465f"
	Oct 18 09:36:36 addons-006674 kubelet[1287]: I1018 09:36:36.686639    1287 scope.go:117] "RemoveContainer" containerID="0cffe982d3755c18b1c3f7c4c2ff475d8a6dc228e13088de3ee4e30e474d465f"
	Oct 18 09:36:36 addons-006674 kubelet[1287]: I1018 09:36:36.686926    1287 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-tjsdw" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 09:36:36 addons-006674 kubelet[1287]: I1018 09:36:36.687146    1287 scope.go:117] "RemoveContainer" containerID="3fe710fabe2e7b4d0145449bd9541fdadb4526285c669276ae6b15a010176c29"
	Oct 18 09:36:36 addons-006674 kubelet[1287]: E1018 09:36:36.687589    1287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-tjsdw_kube-system(23cd49e2-ec97-44a9-9bd9-370ba2b403c4)\"" pod="kube-system/registry-creds-764b6fb674-tjsdw" podUID="23cd49e2-ec97-44a9-9bd9-370ba2b403c4"
	Oct 18 09:36:38 addons-006674 kubelet[1287]: I1018 09:36:38.024770    1287 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-46rp2" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 09:36:40 addons-006674 kubelet[1287]: I1018 09:36:40.024943    1287 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-flkkz" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 09:36:48 addons-006674 kubelet[1287]: I1018 09:36:48.025155    1287 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-tjsdw" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 09:36:48 addons-006674 kubelet[1287]: I1018 09:36:48.025247    1287 scope.go:117] "RemoveContainer" containerID="3fe710fabe2e7b4d0145449bd9541fdadb4526285c669276ae6b15a010176c29"
	Oct 18 09:36:48 addons-006674 kubelet[1287]: E1018 09:36:48.025425    1287 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-tjsdw_kube-system(23cd49e2-ec97-44a9-9bd9-370ba2b403c4)\"" pod="kube-system/registry-creds-764b6fb674-tjsdw" podUID="23cd49e2-ec97-44a9-9bd9-370ba2b403c4"
	Oct 18 09:36:48 addons-006674 kubelet[1287]: I1018 09:36:48.920131    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5813a277-3621-4c4e-b47c-e0a5e7c32705-gcp-creds\") pod \"hello-world-app-5d498dc89-w2lns\" (UID: \"5813a277-3621-4c4e-b47c-e0a5e7c32705\") " pod="default/hello-world-app-5d498dc89-w2lns"
	Oct 18 09:36:48 addons-006674 kubelet[1287]: I1018 09:36:48.920220    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srdh6\" (UniqueName: \"kubernetes.io/projected/5813a277-3621-4c4e-b47c-e0a5e7c32705-kube-api-access-srdh6\") pod \"hello-world-app-5d498dc89-w2lns\" (UID: \"5813a277-3621-4c4e-b47c-e0a5e7c32705\") " pod="default/hello-world-app-5d498dc89-w2lns"
	Oct 18 09:36:50 addons-006674 kubelet[1287]: I1018 09:36:50.754447    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-w2lns" podStartSLOduration=2.147389128 podStartE2EDuration="2.754428037s" podCreationTimestamp="2025-10-18 09:36:48 +0000 UTC" firstStartedPulling="2025-10-18 09:36:49.20437256 +0000 UTC m=+326.299300179" lastFinishedPulling="2025-10-18 09:36:49.811411469 +0000 UTC m=+326.906339088" observedRunningTime="2025-10-18 09:36:50.754324641 +0000 UTC m=+327.849252268" watchObservedRunningTime="2025-10-18 09:36:50.754428037 +0000 UTC m=+327.849355664"
	
	
	==> storage-provisioner [8ba1ab4998b33157d1c11d514e67020abe0f4da2b6dbd327b40e0e14cb877744] <==
	W1018 09:36:26.286767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:28.289681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:28.294286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:30.296893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:30.302085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:32.305979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:32.312658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:34.316317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:34.320648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:36.324222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:36.329095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:38.331901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:38.336538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:40.339007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:40.343546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:42.346923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:42.353832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:44.357241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:44.361633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:46.364135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:46.368436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:48.371944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:48.376403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:50.380106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:50.387485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
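Analyst note (not part of the captured output): in the kubelet log above, registry-creds-764b6fb674-tjsdw is crash-looping and every sync repeats `secret "gcp-auth" not found` pull-secret warnings, while the storage-provisioner block contains only the v1 Endpoints deprecation warning and looks benign. A minimal way to dig into the crash-looping pod, reusing the pod name and kubectl context from this run:

	# Pod name taken verbatim from the kubelet log above; context is this test profile.
	kubectl --context addons-006674 -n kube-system describe pod registry-creds-764b6fb674-tjsdw
	# Inspect the last crashed container's output, not the current restart attempt.
	kubectl --context addons-006674 -n kube-system logs registry-creds-764b6fb674-tjsdw --previous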
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-006674 -n addons-006674
helpers_test.go:269: (dbg) Run:  kubectl --context addons-006674 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-zp84p ingress-nginx-admission-patch-rlk7w
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-006674 describe pod ingress-nginx-admission-create-zp84p ingress-nginx-admission-patch-rlk7w
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-006674 describe pod ingress-nginx-admission-create-zp84p ingress-nginx-admission-patch-rlk7w: exit status 1 (115.825781ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zp84p" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rlk7w" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-006674 describe pod ingress-nginx-admission-create-zp84p ingress-nginx-admission-patch-rlk7w: exit status 1
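Analyst note: the field-selector list at helpers_test.go:280 still saw the two ingress-nginx admission pods, but the follow-up describe returned NotFound for both. That pattern is consistent with completed admission Jobs whose pods were garbage-collected between the two kubectl calls, so the NotFound here is likely a benign race rather than an additional failure. A sketch of how to re-check, assuming the addon's usual ingress-nginx namespace (not shown in the output above):

	# List every non-Running pod, then inspect the admission Jobs directly.
	kubectl --context addons-006674 get pods -A --field-selector=status.phase!=Running
	kubectl --context addons-006674 -n ingress-nginx get jobs,pods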
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-006674 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (315.394067ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:36:52.280848  305647 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:36:52.283289  305647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:36:52.283313  305647 out.go:374] Setting ErrFile to fd 2...
	I1018 09:36:52.283320  305647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:36:52.283640  305647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:36:52.283949  305647 mustload.go:65] Loading cluster: addons-006674
	I1018 09:36:52.284321  305647 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:36:52.284331  305647 addons.go:606] checking whether the cluster is paused
	I1018 09:36:52.284429  305647 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:36:52.284444  305647 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:36:52.284875  305647 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:36:52.307079  305647 ssh_runner.go:195] Run: systemctl --version
	I1018 09:36:52.307139  305647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:36:52.338625  305647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:36:52.447374  305647 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:36:52.447451  305647 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:36:52.480816  305647 cri.go:89] found id: "3fe710fabe2e7b4d0145449bd9541fdadb4526285c669276ae6b15a010176c29"
	I1018 09:36:52.480838  305647 cri.go:89] found id: "92482b56ebf7555fd05147ea25c2da176d87de2950820c3294d15ee1cae2b52d"
	I1018 09:36:52.480843  305647 cri.go:89] found id: "2668cbad9c190dab776247ab10f13e5d60a628e8326305be890dfb8023e10693"
	I1018 09:36:52.480847  305647 cri.go:89] found id: "3868b4ac74b7b2e804174805500883d50e014524523f2fcde2d34c8dae255aa3"
	I1018 09:36:52.480850  305647 cri.go:89] found id: "03c9c979a54ef881644e7a011bf46c6b361f61e955be32b471adedb4f1a228fa"
	I1018 09:36:52.480854  305647 cri.go:89] found id: "9d6a5e7844b19d29c3ee472ccc2ff323792accf04d9c7596b7995838d6ef2216"
	I1018 09:36:52.480857  305647 cri.go:89] found id: "025d3e64c63bd07bcb96631e06f0121dadeb4099055266bb9e87560dbbfdbe24"
	I1018 09:36:52.480860  305647 cri.go:89] found id: "e66aaf86ae284811e190a01db6cd600e4e81b9b038b9d7bdbf9e98398afc5f21"
	I1018 09:36:52.480863  305647 cri.go:89] found id: "fc5f92cc54e3945a4051248c76127d44b77cd5ad41e7680481bf12c73368473b"
	I1018 09:36:52.480869  305647 cri.go:89] found id: "4ed69c6d109cc4bbd324675d793ff430f77eb44fa1add8cd214ea977b38e369c"
	I1018 09:36:52.480872  305647 cri.go:89] found id: "442597e18340796966eb4234f5a955b362dab31d6337efdd6c0daac25ab74e5f"
	I1018 09:36:52.480875  305647 cri.go:89] found id: "54b6974a01255eb0d8fc4a27a1fff1addf769a358124f1111139388415ca2915"
	I1018 09:36:52.480878  305647 cri.go:89] found id: "d7a1cd7ba1844e20a9b434534d2ace9dc4b8410daae08b71ea72c8b4983d46d2"
	I1018 09:36:52.480882  305647 cri.go:89] found id: "1aec9843e6b35b7265e47196412e8358c0ebe00a6e40a979d385546804b7b85a"
	I1018 09:36:52.480885  305647 cri.go:89] found id: "fdaf99bae646f8f12090f49649ca8839c3524ff82dc518bbcc5c5bb5e5652ec8"
	I1018 09:36:52.480891  305647 cri.go:89] found id: "faa78827234374214c9f4cdd38747d941a5f322f9f1a6eb45f5a61fc89ba3085"
	I1018 09:36:52.480894  305647 cri.go:89] found id: "8ba1ab4998b33157d1c11d514e67020abe0f4da2b6dbd327b40e0e14cb877744"
	I1018 09:36:52.480898  305647 cri.go:89] found id: "7a4cd51451e0593916b537cc8613320fe84f5ad1b48e9c20ea79b02ebff89f08"
	I1018 09:36:52.480901  305647 cri.go:89] found id: "ee39b4a9868c7aec2142eb39fa00467bfd823efe9960710ad5f7a6d956fff7cc"
	I1018 09:36:52.480905  305647 cri.go:89] found id: "6864cc8c9035cc4900e88044a87d6126b379de12ae10cf15ebcbac3d449777c6"
	I1018 09:36:52.480910  305647 cri.go:89] found id: "7c7055bef3a7ada650e4d5f05a879413867ddb0163357c223b1f47a1b921b99f"
	I1018 09:36:52.480916  305647 cri.go:89] found id: "265553ed8d31e015701ccfb66997006c5a0cb46907fc11e25d67d2b5235e54e6"
	I1018 09:36:52.480919  305647 cri.go:89] found id: "218e3162f40e71fa576a92a613a2a422c61a439446739273ed3ec3b5b069db24"
	I1018 09:36:52.480922  305647 cri.go:89] found id: "ca64f5775c712d47b50002e93a4481eb4abcb5b068389fb2bfc06c1f7f58345c"
	I1018 09:36:52.480933  305647 cri.go:89] found id: ""
	I1018 09:36:52.480981  305647 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:36:52.496620  305647 out.go:203] 
	W1018 09:36:52.499507  305647 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:36:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:36:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:36:52.499527  305647 out.go:285] * 
	* 
	W1018 09:36:52.505946  305647 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:36:52.508808  305647 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-006674 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
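Analyst note: this exit status 11 is the same MK_ADDON_DISABLE_PAUSED failure that every `addons disable` call hits in this run. Per the trace above, minikube first lists kube-system containers successfully via crictl, then verifies the cluster is not paused with `sudo runc list -f json`, and that second step fails with "open /run/runc: no such file or directory" on this crio node image; the addon logic itself is never reached. A hand reproduction of both steps inside the node, using only commands visible in the log (the `ls` check is an added assumption):

	# The step minikube performs last, and the one that fails on this image:
	minikube -p addons-006674 ssh -- sudo runc list -f json
	# Assumed check: confirm the runc state directory is absent on the node.
	minikube -p addons-006674 ssh -- ls -d /run/runc
	# The earlier crictl listing, which succeeds:
	minikube -p addons-006674 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system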
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-006674 addons disable ingress --alsologtostderr -v=1: exit status 11 (256.283714ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:36:52.564831  305760 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:36:52.565666  305760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:36:52.565680  305760 out.go:374] Setting ErrFile to fd 2...
	I1018 09:36:52.565685  305760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:36:52.565943  305760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:36:52.566248  305760 mustload.go:65] Loading cluster: addons-006674
	I1018 09:36:52.566671  305760 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:36:52.566688  305760 addons.go:606] checking whether the cluster is paused
	I1018 09:36:52.566792  305760 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:36:52.566811  305760 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:36:52.567267  305760 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:36:52.584699  305760 ssh_runner.go:195] Run: systemctl --version
	I1018 09:36:52.584760  305760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:36:52.602201  305760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:36:52.707519  305760 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:36:52.707601  305760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:36:52.737480  305760 cri.go:89] found id: "3fe710fabe2e7b4d0145449bd9541fdadb4526285c669276ae6b15a010176c29"
	I1018 09:36:52.737502  305760 cri.go:89] found id: "92482b56ebf7555fd05147ea25c2da176d87de2950820c3294d15ee1cae2b52d"
	I1018 09:36:52.737508  305760 cri.go:89] found id: "2668cbad9c190dab776247ab10f13e5d60a628e8326305be890dfb8023e10693"
	I1018 09:36:52.737512  305760 cri.go:89] found id: "3868b4ac74b7b2e804174805500883d50e014524523f2fcde2d34c8dae255aa3"
	I1018 09:36:52.737515  305760 cri.go:89] found id: "03c9c979a54ef881644e7a011bf46c6b361f61e955be32b471adedb4f1a228fa"
	I1018 09:36:52.737519  305760 cri.go:89] found id: "9d6a5e7844b19d29c3ee472ccc2ff323792accf04d9c7596b7995838d6ef2216"
	I1018 09:36:52.737522  305760 cri.go:89] found id: "025d3e64c63bd07bcb96631e06f0121dadeb4099055266bb9e87560dbbfdbe24"
	I1018 09:36:52.737525  305760 cri.go:89] found id: "e66aaf86ae284811e190a01db6cd600e4e81b9b038b9d7bdbf9e98398afc5f21"
	I1018 09:36:52.737528  305760 cri.go:89] found id: "fc5f92cc54e3945a4051248c76127d44b77cd5ad41e7680481bf12c73368473b"
	I1018 09:36:52.737534  305760 cri.go:89] found id: "4ed69c6d109cc4bbd324675d793ff430f77eb44fa1add8cd214ea977b38e369c"
	I1018 09:36:52.737537  305760 cri.go:89] found id: "442597e18340796966eb4234f5a955b362dab31d6337efdd6c0daac25ab74e5f"
	I1018 09:36:52.737541  305760 cri.go:89] found id: "54b6974a01255eb0d8fc4a27a1fff1addf769a358124f1111139388415ca2915"
	I1018 09:36:52.737545  305760 cri.go:89] found id: "d7a1cd7ba1844e20a9b434534d2ace9dc4b8410daae08b71ea72c8b4983d46d2"
	I1018 09:36:52.737548  305760 cri.go:89] found id: "1aec9843e6b35b7265e47196412e8358c0ebe00a6e40a979d385546804b7b85a"
	I1018 09:36:52.737551  305760 cri.go:89] found id: "fdaf99bae646f8f12090f49649ca8839c3524ff82dc518bbcc5c5bb5e5652ec8"
	I1018 09:36:52.737555  305760 cri.go:89] found id: "faa78827234374214c9f4cdd38747d941a5f322f9f1a6eb45f5a61fc89ba3085"
	I1018 09:36:52.737561  305760 cri.go:89] found id: "8ba1ab4998b33157d1c11d514e67020abe0f4da2b6dbd327b40e0e14cb877744"
	I1018 09:36:52.737567  305760 cri.go:89] found id: "7a4cd51451e0593916b537cc8613320fe84f5ad1b48e9c20ea79b02ebff89f08"
	I1018 09:36:52.737570  305760 cri.go:89] found id: "ee39b4a9868c7aec2142eb39fa00467bfd823efe9960710ad5f7a6d956fff7cc"
	I1018 09:36:52.737573  305760 cri.go:89] found id: "6864cc8c9035cc4900e88044a87d6126b379de12ae10cf15ebcbac3d449777c6"
	I1018 09:36:52.737578  305760 cri.go:89] found id: "7c7055bef3a7ada650e4d5f05a879413867ddb0163357c223b1f47a1b921b99f"
	I1018 09:36:52.737581  305760 cri.go:89] found id: "265553ed8d31e015701ccfb66997006c5a0cb46907fc11e25d67d2b5235e54e6"
	I1018 09:36:52.737584  305760 cri.go:89] found id: "218e3162f40e71fa576a92a613a2a422c61a439446739273ed3ec3b5b069db24"
	I1018 09:36:52.737588  305760 cri.go:89] found id: "ca64f5775c712d47b50002e93a4481eb4abcb5b068389fb2bfc06c1f7f58345c"
	I1018 09:36:52.737595  305760 cri.go:89] found id: ""
	I1018 09:36:52.737645  305760 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:36:52.753522  305760 out.go:203] 
	W1018 09:36:52.756532  305760 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:36:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:36:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:36:52.756569  305760 out.go:285] * 
	* 
	W1018 09:36:52.763056  305760 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:36:52.766125  305760 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-006674 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.59s)

TestAddons/parallel/InspektorGadget (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-77zfw" [f1c5d73d-de38-4fe1-bcc9-baa233cf4c95] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003325117s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-006674 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (271.250955ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:34:22.599656  303786 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:34:22.600431  303786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:22.600446  303786 out.go:374] Setting ErrFile to fd 2...
	I1018 09:34:22.600451  303786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:22.600713  303786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:34:22.601010  303786 mustload.go:65] Loading cluster: addons-006674
	I1018 09:34:22.602276  303786 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:22.602308  303786 addons.go:606] checking whether the cluster is paused
	I1018 09:34:22.602434  303786 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:22.602458  303786 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:34:22.602919  303786 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:34:22.621160  303786 ssh_runner.go:195] Run: systemctl --version
	I1018 09:34:22.621281  303786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:34:22.638108  303786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:34:22.740728  303786 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:34:22.740826  303786 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:34:22.784429  303786 cri.go:89] found id: "92482b56ebf7555fd05147ea25c2da176d87de2950820c3294d15ee1cae2b52d"
	I1018 09:34:22.784500  303786 cri.go:89] found id: "2668cbad9c190dab776247ab10f13e5d60a628e8326305be890dfb8023e10693"
	I1018 09:34:22.784519  303786 cri.go:89] found id: "3868b4ac74b7b2e804174805500883d50e014524523f2fcde2d34c8dae255aa3"
	I1018 09:34:22.784546  303786 cri.go:89] found id: "03c9c979a54ef881644e7a011bf46c6b361f61e955be32b471adedb4f1a228fa"
	I1018 09:34:22.784564  303786 cri.go:89] found id: "9d6a5e7844b19d29c3ee472ccc2ff323792accf04d9c7596b7995838d6ef2216"
	I1018 09:34:22.784583  303786 cri.go:89] found id: "025d3e64c63bd07bcb96631e06f0121dadeb4099055266bb9e87560dbbfdbe24"
	I1018 09:34:22.784602  303786 cri.go:89] found id: "e66aaf86ae284811e190a01db6cd600e4e81b9b038b9d7bdbf9e98398afc5f21"
	I1018 09:34:22.784627  303786 cri.go:89] found id: "fc5f92cc54e3945a4051248c76127d44b77cd5ad41e7680481bf12c73368473b"
	I1018 09:34:22.784646  303786 cri.go:89] found id: "4ed69c6d109cc4bbd324675d793ff430f77eb44fa1add8cd214ea977b38e369c"
	I1018 09:34:22.784667  303786 cri.go:89] found id: "442597e18340796966eb4234f5a955b362dab31d6337efdd6c0daac25ab74e5f"
	I1018 09:34:22.784685  303786 cri.go:89] found id: "54b6974a01255eb0d8fc4a27a1fff1addf769a358124f1111139388415ca2915"
	I1018 09:34:22.784709  303786 cri.go:89] found id: "d7a1cd7ba1844e20a9b434534d2ace9dc4b8410daae08b71ea72c8b4983d46d2"
	I1018 09:34:22.784726  303786 cri.go:89] found id: "1aec9843e6b35b7265e47196412e8358c0ebe00a6e40a979d385546804b7b85a"
	I1018 09:34:22.784743  303786 cri.go:89] found id: "fdaf99bae646f8f12090f49649ca8839c3524ff82dc518bbcc5c5bb5e5652ec8"
	I1018 09:34:22.784762  303786 cri.go:89] found id: "faa78827234374214c9f4cdd38747d941a5f322f9f1a6eb45f5a61fc89ba3085"
	I1018 09:34:22.784795  303786 cri.go:89] found id: "8ba1ab4998b33157d1c11d514e67020abe0f4da2b6dbd327b40e0e14cb877744"
	I1018 09:34:22.784828  303786 cri.go:89] found id: "7a4cd51451e0593916b537cc8613320fe84f5ad1b48e9c20ea79b02ebff89f08"
	I1018 09:34:22.784849  303786 cri.go:89] found id: "ee39b4a9868c7aec2142eb39fa00467bfd823efe9960710ad5f7a6d956fff7cc"
	I1018 09:34:22.784874  303786 cri.go:89] found id: "6864cc8c9035cc4900e88044a87d6126b379de12ae10cf15ebcbac3d449777c6"
	I1018 09:34:22.784893  303786 cri.go:89] found id: "7c7055bef3a7ada650e4d5f05a879413867ddb0163357c223b1f47a1b921b99f"
	I1018 09:34:22.784913  303786 cri.go:89] found id: "265553ed8d31e015701ccfb66997006c5a0cb46907fc11e25d67d2b5235e54e6"
	I1018 09:34:22.784929  303786 cri.go:89] found id: "218e3162f40e71fa576a92a613a2a422c61a439446739273ed3ec3b5b069db24"
	I1018 09:34:22.784955  303786 cri.go:89] found id: "ca64f5775c712d47b50002e93a4481eb4abcb5b068389fb2bfc06c1f7f58345c"
	I1018 09:34:22.784972  303786 cri.go:89] found id: ""
	I1018 09:34:22.785039  303786 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:34:22.800751  303786 out.go:203] 
	W1018 09:34:22.803605  303786 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:34:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:34:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:34:22.803634  303786 out.go:285] * 
	* 
	W1018 09:34:22.810353  303786 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:34:22.813356  303786 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-006674 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.28s)

TestAddons/parallel/MetricsServer (6.48s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.644408ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-szvm5" [7f1c3285-5e41-444e-a773-3a86f80ec0c9] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003196152s
addons_test.go:463: (dbg) Run:  kubectl --context addons-006674 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-006674 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (351.411955ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:34:29.025175  304153 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:34:29.034400  304153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:29.034423  304153 out.go:374] Setting ErrFile to fd 2...
	I1018 09:34:29.034429  304153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:29.034733  304153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:34:29.035182  304153 mustload.go:65] Loading cluster: addons-006674
	I1018 09:34:29.035616  304153 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:29.035638  304153 addons.go:606] checking whether the cluster is paused
	I1018 09:34:29.035751  304153 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:29.035774  304153 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:34:29.036313  304153 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:34:29.060102  304153 ssh_runner.go:195] Run: systemctl --version
	I1018 09:34:29.060156  304153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:34:29.084224  304153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:34:29.200259  304153 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:34:29.200353  304153 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:34:29.259822  304153 cri.go:89] found id: "92482b56ebf7555fd05147ea25c2da176d87de2950820c3294d15ee1cae2b52d"
	I1018 09:34:29.259846  304153 cri.go:89] found id: "2668cbad9c190dab776247ab10f13e5d60a628e8326305be890dfb8023e10693"
	I1018 09:34:29.259850  304153 cri.go:89] found id: "3868b4ac74b7b2e804174805500883d50e014524523f2fcde2d34c8dae255aa3"
	I1018 09:34:29.259853  304153 cri.go:89] found id: "03c9c979a54ef881644e7a011bf46c6b361f61e955be32b471adedb4f1a228fa"
	I1018 09:34:29.259856  304153 cri.go:89] found id: "9d6a5e7844b19d29c3ee472ccc2ff323792accf04d9c7596b7995838d6ef2216"
	I1018 09:34:29.259860  304153 cri.go:89] found id: "025d3e64c63bd07bcb96631e06f0121dadeb4099055266bb9e87560dbbfdbe24"
	I1018 09:34:29.259863  304153 cri.go:89] found id: "e66aaf86ae284811e190a01db6cd600e4e81b9b038b9d7bdbf9e98398afc5f21"
	I1018 09:34:29.259866  304153 cri.go:89] found id: "fc5f92cc54e3945a4051248c76127d44b77cd5ad41e7680481bf12c73368473b"
	I1018 09:34:29.259869  304153 cri.go:89] found id: "4ed69c6d109cc4bbd324675d793ff430f77eb44fa1add8cd214ea977b38e369c"
	I1018 09:34:29.259876  304153 cri.go:89] found id: "442597e18340796966eb4234f5a955b362dab31d6337efdd6c0daac25ab74e5f"
	I1018 09:34:29.259880  304153 cri.go:89] found id: "54b6974a01255eb0d8fc4a27a1fff1addf769a358124f1111139388415ca2915"
	I1018 09:34:29.259883  304153 cri.go:89] found id: "d7a1cd7ba1844e20a9b434534d2ace9dc4b8410daae08b71ea72c8b4983d46d2"
	I1018 09:34:29.259886  304153 cri.go:89] found id: "1aec9843e6b35b7265e47196412e8358c0ebe00a6e40a979d385546804b7b85a"
	I1018 09:34:29.259890  304153 cri.go:89] found id: "fdaf99bae646f8f12090f49649ca8839c3524ff82dc518bbcc5c5bb5e5652ec8"
	I1018 09:34:29.259893  304153 cri.go:89] found id: "faa78827234374214c9f4cdd38747d941a5f322f9f1a6eb45f5a61fc89ba3085"
	I1018 09:34:29.259902  304153 cri.go:89] found id: "8ba1ab4998b33157d1c11d514e67020abe0f4da2b6dbd327b40e0e14cb877744"
	I1018 09:34:29.259906  304153 cri.go:89] found id: "7a4cd51451e0593916b537cc8613320fe84f5ad1b48e9c20ea79b02ebff89f08"
	I1018 09:34:29.259910  304153 cri.go:89] found id: "ee39b4a9868c7aec2142eb39fa00467bfd823efe9960710ad5f7a6d956fff7cc"
	I1018 09:34:29.259913  304153 cri.go:89] found id: "6864cc8c9035cc4900e88044a87d6126b379de12ae10cf15ebcbac3d449777c6"
	I1018 09:34:29.259916  304153 cri.go:89] found id: "7c7055bef3a7ada650e4d5f05a879413867ddb0163357c223b1f47a1b921b99f"
	I1018 09:34:29.259921  304153 cri.go:89] found id: "265553ed8d31e015701ccfb66997006c5a0cb46907fc11e25d67d2b5235e54e6"
	I1018 09:34:29.259924  304153 cri.go:89] found id: "218e3162f40e71fa576a92a613a2a422c61a439446739273ed3ec3b5b069db24"
	I1018 09:34:29.259928  304153 cri.go:89] found id: "ca64f5775c712d47b50002e93a4481eb4abcb5b068389fb2bfc06c1f7f58345c"
	I1018 09:34:29.259931  304153 cri.go:89] found id: ""
	I1018 09:34:29.259982  304153 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:34:29.282418  304153 out.go:203] 
	W1018 09:34:29.285699  304153 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:34:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:34:29.285730  304153 out.go:285] * 
	W1018 09:34:29.292210  304153 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:34:29.296983  304153 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-006674 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.48s)

TestAddons/parallel/CSI (44.9s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1018 09:33:42.282375  295193 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1018 09:33:42.287361  295193 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1018 09:33:42.287390  295193 kapi.go:107] duration metric: took 5.033211ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.043861ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-006674 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-006674 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c3b12db2-7014-4533-901b-6d1ccae85335] Pending
helpers_test.go:352: "task-pv-pod" [c3b12db2-7014-4533-901b-6d1ccae85335] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c3b12db2-7014-4533-901b-6d1ccae85335] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.008978397s
addons_test.go:572: (dbg) Run:  kubectl --context addons-006674 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-006674 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-006674 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-006674 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-006674 delete pod task-pv-pod: (1.139334212s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-006674 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-006674 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-006674 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [d9df0938-bead-420a-9b06-d02ea8ba564d] Pending
helpers_test.go:352: "task-pv-pod-restore" [d9df0938-bead-420a-9b06-d02ea8ba564d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [d9df0938-bead-420a-9b06-d02ea8ba564d] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004481408s
addons_test.go:614: (dbg) Run:  kubectl --context addons-006674 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-006674 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-006674 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-006674 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (274.586108ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:34:26.691145  303891 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:34:26.692134  303891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:26.692159  303891 out.go:374] Setting ErrFile to fd 2...
	I1018 09:34:26.692164  303891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:26.692556  303891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:34:26.693047  303891 mustload.go:65] Loading cluster: addons-006674
	I1018 09:34:26.693620  303891 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:26.693649  303891 addons.go:606] checking whether the cluster is paused
	I1018 09:34:26.693840  303891 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:26.693858  303891 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:34:26.694457  303891 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:34:26.713491  303891 ssh_runner.go:195] Run: systemctl --version
	I1018 09:34:26.713542  303891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:34:26.733313  303891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:34:26.840524  303891 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:34:26.840620  303891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:34:26.871536  303891 cri.go:89] found id: "92482b56ebf7555fd05147ea25c2da176d87de2950820c3294d15ee1cae2b52d"
	I1018 09:34:26.871561  303891 cri.go:89] found id: "2668cbad9c190dab776247ab10f13e5d60a628e8326305be890dfb8023e10693"
	I1018 09:34:26.871567  303891 cri.go:89] found id: "3868b4ac74b7b2e804174805500883d50e014524523f2fcde2d34c8dae255aa3"
	I1018 09:34:26.871571  303891 cri.go:89] found id: "03c9c979a54ef881644e7a011bf46c6b361f61e955be32b471adedb4f1a228fa"
	I1018 09:34:26.871574  303891 cri.go:89] found id: "9d6a5e7844b19d29c3ee472ccc2ff323792accf04d9c7596b7995838d6ef2216"
	I1018 09:34:26.871578  303891 cri.go:89] found id: "025d3e64c63bd07bcb96631e06f0121dadeb4099055266bb9e87560dbbfdbe24"
	I1018 09:34:26.871581  303891 cri.go:89] found id: "e66aaf86ae284811e190a01db6cd600e4e81b9b038b9d7bdbf9e98398afc5f21"
	I1018 09:34:26.871585  303891 cri.go:89] found id: "fc5f92cc54e3945a4051248c76127d44b77cd5ad41e7680481bf12c73368473b"
	I1018 09:34:26.871588  303891 cri.go:89] found id: "4ed69c6d109cc4bbd324675d793ff430f77eb44fa1add8cd214ea977b38e369c"
	I1018 09:34:26.871596  303891 cri.go:89] found id: "442597e18340796966eb4234f5a955b362dab31d6337efdd6c0daac25ab74e5f"
	I1018 09:34:26.871599  303891 cri.go:89] found id: "54b6974a01255eb0d8fc4a27a1fff1addf769a358124f1111139388415ca2915"
	I1018 09:34:26.871604  303891 cri.go:89] found id: "d7a1cd7ba1844e20a9b434534d2ace9dc4b8410daae08b71ea72c8b4983d46d2"
	I1018 09:34:26.871608  303891 cri.go:89] found id: "1aec9843e6b35b7265e47196412e8358c0ebe00a6e40a979d385546804b7b85a"
	I1018 09:34:26.871611  303891 cri.go:89] found id: "fdaf99bae646f8f12090f49649ca8839c3524ff82dc518bbcc5c5bb5e5652ec8"
	I1018 09:34:26.871620  303891 cri.go:89] found id: "faa78827234374214c9f4cdd38747d941a5f322f9f1a6eb45f5a61fc89ba3085"
	I1018 09:34:26.871627  303891 cri.go:89] found id: "8ba1ab4998b33157d1c11d514e67020abe0f4da2b6dbd327b40e0e14cb877744"
	I1018 09:34:26.871635  303891 cri.go:89] found id: "7a4cd51451e0593916b537cc8613320fe84f5ad1b48e9c20ea79b02ebff89f08"
	I1018 09:34:26.871639  303891 cri.go:89] found id: "ee39b4a9868c7aec2142eb39fa00467bfd823efe9960710ad5f7a6d956fff7cc"
	I1018 09:34:26.871643  303891 cri.go:89] found id: "6864cc8c9035cc4900e88044a87d6126b379de12ae10cf15ebcbac3d449777c6"
	I1018 09:34:26.871646  303891 cri.go:89] found id: "7c7055bef3a7ada650e4d5f05a879413867ddb0163357c223b1f47a1b921b99f"
	I1018 09:34:26.871650  303891 cri.go:89] found id: "265553ed8d31e015701ccfb66997006c5a0cb46907fc11e25d67d2b5235e54e6"
	I1018 09:34:26.871654  303891 cri.go:89] found id: "218e3162f40e71fa576a92a613a2a422c61a439446739273ed3ec3b5b069db24"
	I1018 09:34:26.871657  303891 cri.go:89] found id: "ca64f5775c712d47b50002e93a4481eb4abcb5b068389fb2bfc06c1f7f58345c"
	I1018 09:34:26.871659  303891 cri.go:89] found id: ""
	I1018 09:34:26.871713  303891 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:34:26.886328  303891 out.go:203] 
	W1018 09:34:26.889264  303891 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:34:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:34:26.889294  303891 out.go:285] * 
	W1018 09:34:26.895664  303891 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:34:26.898740  303891 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-006674 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-006674 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (276.857632ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:34:26.958272  303936 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:34:26.959517  303936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:26.959536  303936 out.go:374] Setting ErrFile to fd 2...
	I1018 09:34:26.959543  303936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:26.959803  303936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:34:26.960246  303936 mustload.go:65] Loading cluster: addons-006674
	I1018 09:34:26.960659  303936 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:26.960702  303936 addons.go:606] checking whether the cluster is paused
	I1018 09:34:26.960866  303936 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:26.960887  303936 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:34:26.961658  303936 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:34:26.979857  303936 ssh_runner.go:195] Run: systemctl --version
	I1018 09:34:26.979915  303936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:34:27.000469  303936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:34:27.103925  303936 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:34:27.104011  303936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:34:27.145672  303936 cri.go:89] found id: "92482b56ebf7555fd05147ea25c2da176d87de2950820c3294d15ee1cae2b52d"
	I1018 09:34:27.145695  303936 cri.go:89] found id: "2668cbad9c190dab776247ab10f13e5d60a628e8326305be890dfb8023e10693"
	I1018 09:34:27.145700  303936 cri.go:89] found id: "3868b4ac74b7b2e804174805500883d50e014524523f2fcde2d34c8dae255aa3"
	I1018 09:34:27.145704  303936 cri.go:89] found id: "03c9c979a54ef881644e7a011bf46c6b361f61e955be32b471adedb4f1a228fa"
	I1018 09:34:27.145707  303936 cri.go:89] found id: "9d6a5e7844b19d29c3ee472ccc2ff323792accf04d9c7596b7995838d6ef2216"
	I1018 09:34:27.145710  303936 cri.go:89] found id: "025d3e64c63bd07bcb96631e06f0121dadeb4099055266bb9e87560dbbfdbe24"
	I1018 09:34:27.145713  303936 cri.go:89] found id: "e66aaf86ae284811e190a01db6cd600e4e81b9b038b9d7bdbf9e98398afc5f21"
	I1018 09:34:27.145716  303936 cri.go:89] found id: "fc5f92cc54e3945a4051248c76127d44b77cd5ad41e7680481bf12c73368473b"
	I1018 09:34:27.145720  303936 cri.go:89] found id: "4ed69c6d109cc4bbd324675d793ff430f77eb44fa1add8cd214ea977b38e369c"
	I1018 09:34:27.145729  303936 cri.go:89] found id: "442597e18340796966eb4234f5a955b362dab31d6337efdd6c0daac25ab74e5f"
	I1018 09:34:27.145733  303936 cri.go:89] found id: "54b6974a01255eb0d8fc4a27a1fff1addf769a358124f1111139388415ca2915"
	I1018 09:34:27.145736  303936 cri.go:89] found id: "d7a1cd7ba1844e20a9b434534d2ace9dc4b8410daae08b71ea72c8b4983d46d2"
	I1018 09:34:27.145739  303936 cri.go:89] found id: "1aec9843e6b35b7265e47196412e8358c0ebe00a6e40a979d385546804b7b85a"
	I1018 09:34:27.145742  303936 cri.go:89] found id: "fdaf99bae646f8f12090f49649ca8839c3524ff82dc518bbcc5c5bb5e5652ec8"
	I1018 09:34:27.145745  303936 cri.go:89] found id: "faa78827234374214c9f4cdd38747d941a5f322f9f1a6eb45f5a61fc89ba3085"
	I1018 09:34:27.145751  303936 cri.go:89] found id: "8ba1ab4998b33157d1c11d514e67020abe0f4da2b6dbd327b40e0e14cb877744"
	I1018 09:34:27.145755  303936 cri.go:89] found id: "7a4cd51451e0593916b537cc8613320fe84f5ad1b48e9c20ea79b02ebff89f08"
	I1018 09:34:27.145761  303936 cri.go:89] found id: "ee39b4a9868c7aec2142eb39fa00467bfd823efe9960710ad5f7a6d956fff7cc"
	I1018 09:34:27.145765  303936 cri.go:89] found id: "6864cc8c9035cc4900e88044a87d6126b379de12ae10cf15ebcbac3d449777c6"
	I1018 09:34:27.145767  303936 cri.go:89] found id: "7c7055bef3a7ada650e4d5f05a879413867ddb0163357c223b1f47a1b921b99f"
	I1018 09:34:27.145772  303936 cri.go:89] found id: "265553ed8d31e015701ccfb66997006c5a0cb46907fc11e25d67d2b5235e54e6"
	I1018 09:34:27.145776  303936 cri.go:89] found id: "218e3162f40e71fa576a92a613a2a422c61a439446739273ed3ec3b5b069db24"
	I1018 09:34:27.145779  303936 cri.go:89] found id: "ca64f5775c712d47b50002e93a4481eb4abcb5b068389fb2bfc06c1f7f58345c"
	I1018 09:34:27.145781  303936 cri.go:89] found id: ""
	I1018 09:34:27.145835  303936 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:34:27.162808  303936 out.go:203] 
	W1018 09:34:27.165897  303936 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:34:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:34:27.165976  303936 out.go:285] * 
	W1018 09:34:27.172388  303936 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:34:27.175415  303936 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-006674 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (44.90s)
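Note: the functional steps above all passed (hpvc bound, the snapshot restored, and both task-pv pods ran); only the trailing addon-disable calls failed, on the same runc probe as the other failures. For local debugging, the logged kubectl sequence can be replayed against a live cluster. A sketch, assuming the testdata manifests from a minikube checkout and eliding the waits the test performs between steps (pvc Bound, pod Running, snapshot readyToUse):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// Replays the CSI snapshot/restore sequence from the log. The context name
	// and manifest paths are copied verbatim from the log lines above;
	// substitute your own profile and manifests as needed.
	func main() {
		steps := [][]string{
			{"kubectl", "--context", "addons-006674", "create", "-f", "testdata/csi-hostpath-driver/pvc.yaml"},
			{"kubectl", "--context", "addons-006674", "create", "-f", "testdata/csi-hostpath-driver/pv-pod.yaml"},
			{"kubectl", "--context", "addons-006674", "create", "-f", "testdata/csi-hostpath-driver/snapshot.yaml"},
			{"kubectl", "--context", "addons-006674", "delete", "pod", "task-pv-pod"},
			{"kubectl", "--context", "addons-006674", "delete", "pvc", "hpvc"},
			{"kubectl", "--context", "addons-006674", "create", "-f", "testdata/csi-hostpath-driver/pvc-restore.yaml"},
			{"kubectl", "--context", "addons-006674", "create", "-f", "testdata/csi-hostpath-driver/pv-pod-restore.yaml"},
		}
		for _, s := range steps {
			out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
			fmt.Printf("$ %v\n%s", s, out)
			if err != nil {
				fmt.Println("step failed:", err)
				return
			}
		}
	}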

TestAddons/parallel/Headlamp (3.42s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-006674 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-006674 --alsologtostderr -v=1: exit status 11 (271.222246ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:34:13.175224  303182 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:34:13.176051  303182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:13.176092  303182 out.go:374] Setting ErrFile to fd 2...
	I1018 09:34:13.176115  303182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:13.176425  303182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:34:13.176801  303182 mustload.go:65] Loading cluster: addons-006674
	I1018 09:34:13.177319  303182 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:13.177366  303182 addons.go:606] checking whether the cluster is paused
	I1018 09:34:13.177502  303182 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:13.177537  303182 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:34:13.178113  303182 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:34:13.196337  303182 ssh_runner.go:195] Run: systemctl --version
	I1018 09:34:13.196399  303182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:34:13.226818  303182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:34:13.331551  303182 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:34:13.331631  303182 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:34:13.360526  303182 cri.go:89] found id: "92482b56ebf7555fd05147ea25c2da176d87de2950820c3294d15ee1cae2b52d"
	I1018 09:34:13.360551  303182 cri.go:89] found id: "2668cbad9c190dab776247ab10f13e5d60a628e8326305be890dfb8023e10693"
	I1018 09:34:13.360555  303182 cri.go:89] found id: "3868b4ac74b7b2e804174805500883d50e014524523f2fcde2d34c8dae255aa3"
	I1018 09:34:13.360559  303182 cri.go:89] found id: "03c9c979a54ef881644e7a011bf46c6b361f61e955be32b471adedb4f1a228fa"
	I1018 09:34:13.360568  303182 cri.go:89] found id: "9d6a5e7844b19d29c3ee472ccc2ff323792accf04d9c7596b7995838d6ef2216"
	I1018 09:34:13.360572  303182 cri.go:89] found id: "025d3e64c63bd07bcb96631e06f0121dadeb4099055266bb9e87560dbbfdbe24"
	I1018 09:34:13.360575  303182 cri.go:89] found id: "e66aaf86ae284811e190a01db6cd600e4e81b9b038b9d7bdbf9e98398afc5f21"
	I1018 09:34:13.360578  303182 cri.go:89] found id: "fc5f92cc54e3945a4051248c76127d44b77cd5ad41e7680481bf12c73368473b"
	I1018 09:34:13.360581  303182 cri.go:89] found id: "4ed69c6d109cc4bbd324675d793ff430f77eb44fa1add8cd214ea977b38e369c"
	I1018 09:34:13.360611  303182 cri.go:89] found id: "442597e18340796966eb4234f5a955b362dab31d6337efdd6c0daac25ab74e5f"
	I1018 09:34:13.360622  303182 cri.go:89] found id: "54b6974a01255eb0d8fc4a27a1fff1addf769a358124f1111139388415ca2915"
	I1018 09:34:13.360625  303182 cri.go:89] found id: "d7a1cd7ba1844e20a9b434534d2ace9dc4b8410daae08b71ea72c8b4983d46d2"
	I1018 09:34:13.360628  303182 cri.go:89] found id: "1aec9843e6b35b7265e47196412e8358c0ebe00a6e40a979d385546804b7b85a"
	I1018 09:34:13.360631  303182 cri.go:89] found id: "fdaf99bae646f8f12090f49649ca8839c3524ff82dc518bbcc5c5bb5e5652ec8"
	I1018 09:34:13.360634  303182 cri.go:89] found id: "faa78827234374214c9f4cdd38747d941a5f322f9f1a6eb45f5a61fc89ba3085"
	I1018 09:34:13.360639  303182 cri.go:89] found id: "8ba1ab4998b33157d1c11d514e67020abe0f4da2b6dbd327b40e0e14cb877744"
	I1018 09:34:13.360645  303182 cri.go:89] found id: "7a4cd51451e0593916b537cc8613320fe84f5ad1b48e9c20ea79b02ebff89f08"
	I1018 09:34:13.360654  303182 cri.go:89] found id: "ee39b4a9868c7aec2142eb39fa00467bfd823efe9960710ad5f7a6d956fff7cc"
	I1018 09:34:13.360658  303182 cri.go:89] found id: "6864cc8c9035cc4900e88044a87d6126b379de12ae10cf15ebcbac3d449777c6"
	I1018 09:34:13.360661  303182 cri.go:89] found id: "7c7055bef3a7ada650e4d5f05a879413867ddb0163357c223b1f47a1b921b99f"
	I1018 09:34:13.360666  303182 cri.go:89] found id: "265553ed8d31e015701ccfb66997006c5a0cb46907fc11e25d67d2b5235e54e6"
	I1018 09:34:13.360698  303182 cri.go:89] found id: "218e3162f40e71fa576a92a613a2a422c61a439446739273ed3ec3b5b069db24"
	I1018 09:34:13.360708  303182 cri.go:89] found id: "ca64f5775c712d47b50002e93a4481eb4abcb5b068389fb2bfc06c1f7f58345c"
	I1018 09:34:13.360711  303182 cri.go:89] found id: ""
	I1018 09:34:13.360773  303182 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:34:13.375623  303182 out.go:203] 
	W1018 09:34:13.378481  303182 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:34:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:34:13.378509  303182 out.go:285] * 
	W1018 09:34:13.384913  303182 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:34:13.387892  303182 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-006674 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-006674
helpers_test.go:243: (dbg) docker inspect addons-006674:

-- stdout --
	[
	    {
	        "Id": "2a58daa84df606f1a3eacd3ed59a710b3ede45b497c9ff78e57c7c851671ea0c",
	        "Created": "2025-10-18T09:30:55.363190236Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296351,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:30:55.429094391Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/2a58daa84df606f1a3eacd3ed59a710b3ede45b497c9ff78e57c7c851671ea0c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2a58daa84df606f1a3eacd3ed59a710b3ede45b497c9ff78e57c7c851671ea0c/hostname",
	        "HostsPath": "/var/lib/docker/containers/2a58daa84df606f1a3eacd3ed59a710b3ede45b497c9ff78e57c7c851671ea0c/hosts",
	        "LogPath": "/var/lib/docker/containers/2a58daa84df606f1a3eacd3ed59a710b3ede45b497c9ff78e57c7c851671ea0c/2a58daa84df606f1a3eacd3ed59a710b3ede45b497c9ff78e57c7c851671ea0c-json.log",
	        "Name": "/addons-006674",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-006674:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-006674",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2a58daa84df606f1a3eacd3ed59a710b3ede45b497c9ff78e57c7c851671ea0c",
	                "LowerDir": "/var/lib/docker/overlay2/29d18a6b1974dba062c4a5a3e8cc7328dbfa0c44e2c9c6fd83c6843a9a7db9fb-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/29d18a6b1974dba062c4a5a3e8cc7328dbfa0c44e2c9c6fd83c6843a9a7db9fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/29d18a6b1974dba062c4a5a3e8cc7328dbfa0c44e2c9c6fd83c6843a9a7db9fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/29d18a6b1974dba062c4a5a3e8cc7328dbfa0c44e2c9c6fd83c6843a9a7db9fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-006674",
	                "Source": "/var/lib/docker/volumes/addons-006674/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-006674",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-006674",
	                "name.minikube.sigs.k8s.io": "addons-006674",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3bab0fa0bf04f2f8254ca852eaeed22fb804d3deb9d1901bb25f9bf177d20b8b",
	            "SandboxKey": "/var/run/docker/netns/3bab0fa0bf04",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-006674": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:cd:c9:47:b3:08",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fb4a9eff6b0ac61b7987fb82f64010d024279717261b1f1f792e101a365c1e6d",
	                    "EndpointID": "ceb2cb30ade6094597b8eab5237c686483adfa9ab74715d58ec0b51eb3192d35",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-006674",
	                        "2a58daa84df6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
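Note: the inspect output above is also where the SSH endpoint used throughout these logs comes from. The template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} that the logs show being passed to "docker container inspect -f" resolves to 33138 here. A small sketch of the same lookup, assuming only that the docker CLI is on PATH:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// hostPort extracts the host port bound to a container port, using the same
	// Go template the logs show minikube passing to docker. For the inspect
	// output above, hostPort("addons-006674", "22/tcp") returns "33138".
	func hostPort(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		p, err := hostPort("addons-006674", "22/tcp")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh host port:", p)
	}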
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-006674 -n addons-006674
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-006674 logs -n 25: (1.578642652s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-195254 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-195254   │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ delete  │ -p download-only-195254                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-195254   │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ start   │ -o=json --download-only -p download-only-370905 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-370905   │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ delete  │ -p download-only-370905                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-370905   │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ delete  │ -p download-only-195254                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-195254   │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ delete  │ -p download-only-370905                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-370905   │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ start   │ --download-only -p download-docker-724083 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-724083 │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ delete  │ -p download-docker-724083                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-724083 │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ start   │ --download-only -p binary-mirror-816488 --alsologtostderr --binary-mirror http://127.0.0.1:39529 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-816488   │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ delete  │ -p binary-mirror-816488                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-816488   │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ addons  │ disable dashboard -p addons-006674                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ addons  │ enable dashboard -p addons-006674                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ start   │ -p addons-006674 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ addons-006674 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ addons  │ addons-006674 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ addons  │ addons-006674 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ ip      │ addons-006674 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ addons-006674 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ addons  │ addons-006674 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ ssh     │ addons-006674 ssh cat /opt/local-path-provisioner/pvc-a1742402-0986-435b-8326-e21304879a9e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ addons  │ addons-006674 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ addons  │ addons-006674 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ addons  │ enable headlamp -p addons-006674 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-006674          │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:30:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:30:30.161475  295952 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:30:30.161683  295952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:30:30.161716  295952 out.go:374] Setting ErrFile to fd 2...
	I1018 09:30:30.161735  295952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:30:30.162058  295952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:30:30.162593  295952 out.go:368] Setting JSON to false
	I1018 09:30:30.163634  295952 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4381,"bootTime":1760775450,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 09:30:30.163759  295952 start.go:141] virtualization:  
	I1018 09:30:30.167275  295952 out.go:179] * [addons-006674] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:30:30.171150  295952 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:30:30.171271  295952 notify.go:220] Checking for updates...
	I1018 09:30:30.177307  295952 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:30:30.180432  295952 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 09:30:30.183450  295952 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 09:30:30.186578  295952 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:30:30.190445  295952 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:30:30.193870  295952 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:30:30.228010  295952 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:30:30.228149  295952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:30:30.285876  295952 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-18 09:30:30.276115025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:30:30.285984  295952 docker.go:318] overlay module found
	I1018 09:30:30.289128  295952 out.go:179] * Using the docker driver based on user configuration
	I1018 09:30:30.291965  295952 start.go:305] selected driver: docker
	I1018 09:30:30.291984  295952 start.go:925] validating driver "docker" against <nil>
	I1018 09:30:30.291998  295952 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:30:30.292741  295952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:30:30.348656  295952 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-18 09:30:30.339892562 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:30:30.348821  295952 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:30:30.349052  295952 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:30:30.352065  295952 out.go:179] * Using Docker driver with root privileges
	I1018 09:30:30.354939  295952 cni.go:84] Creating CNI manager for ""
	I1018 09:30:30.355041  295952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:30:30.355058  295952 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:30:30.355138  295952 start.go:349] cluster config:
	{Name:addons-006674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:30:30.360112  295952 out.go:179] * Starting "addons-006674" primary control-plane node in "addons-006674" cluster
	I1018 09:30:30.362922  295952 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:30:30.365778  295952 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:30:30.368569  295952 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:30:30.368622  295952 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 09:30:30.368634  295952 cache.go:58] Caching tarball of preloaded images
	I1018 09:30:30.368667  295952 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:30:30.368728  295952 preload.go:233] Found /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 09:30:30.368738  295952 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:30:30.369096  295952 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/config.json ...
	I1018 09:30:30.369128  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/config.json: {Name:mka65e5b9d37d2e4b2c1304e163a9cf934b6d64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:30:30.384435  295952 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 09:30:30.384582  295952 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 09:30:30.384602  295952 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 09:30:30.384607  295952 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 09:30:30.384615  295952 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 09:30:30.384620  295952 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 09:30:48.369494  295952 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 09:30:48.369536  295952 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:30:48.369566  295952 start.go:360] acquireMachinesLock for addons-006674: {Name:mk7e4142b1387a9d5103c52b0dd86664f3e789c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:48.369692  295952 start.go:364] duration metric: took 104.643µs to acquireMachinesLock for "addons-006674"
	I1018 09:30:48.369725  295952 start.go:93] Provisioning new machine with config: &{Name:addons-006674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:30:48.369810  295952 start.go:125] createHost starting for "" (driver="docker")
	I1018 09:30:48.373316  295952 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 09:30:48.373563  295952 start.go:159] libmachine.API.Create for "addons-006674" (driver="docker")
	I1018 09:30:48.373608  295952 client.go:168] LocalClient.Create starting
	I1018 09:30:48.373742  295952 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem
	I1018 09:30:48.445303  295952 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem
	I1018 09:30:48.676629  295952 cli_runner.go:164] Run: docker network inspect addons-006674 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:30:48.691352  295952 cli_runner.go:211] docker network inspect addons-006674 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:30:48.691443  295952 network_create.go:284] running [docker network inspect addons-006674] to gather additional debugging logs...
	I1018 09:30:48.691461  295952 cli_runner.go:164] Run: docker network inspect addons-006674
	W1018 09:30:48.706405  295952 cli_runner.go:211] docker network inspect addons-006674 returned with exit code 1
	I1018 09:30:48.706439  295952 network_create.go:287] error running [docker network inspect addons-006674]: docker network inspect addons-006674: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-006674 not found
	I1018 09:30:48.706451  295952 network_create.go:289] output of [docker network inspect addons-006674]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-006674 not found
	
	** /stderr **
	I1018 09:30:48.706614  295952 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:30:48.722310  295952 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ace720}
	I1018 09:30:48.722358  295952 network_create.go:124] attempt to create docker network addons-006674 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 09:30:48.722414  295952 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-006674 addons-006674
	I1018 09:30:48.779724  295952 network_create.go:108] docker network addons-006674 192.168.49.0/24 created
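	Note on the two steps above: each minikube profile gets a dedicated bridge network. The code first scans for a free private /24 (here 192.168.49.0/24), then shells out to docker network create with explicit --subnet, --gateway and MTU options. A minimal sketch, reusing the profile name from this log, to verify the network came up as calculated:
	
	  docker network inspect addons-006674 \
	    --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
	  # expected output: subnet=192.168.49.0/24 gateway=192.168.49.1
	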
	I1018 09:30:48.779775  295952 kic.go:121] calculated static IP "192.168.49.2" for the "addons-006674" container
	I1018 09:30:48.779858  295952 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:30:48.795098  295952 cli_runner.go:164] Run: docker volume create addons-006674 --label name.minikube.sigs.k8s.io=addons-006674 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:30:48.812727  295952 oci.go:103] Successfully created a docker volume addons-006674
	I1018 09:30:48.812820  295952 cli_runner.go:164] Run: docker run --rm --name addons-006674-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006674 --entrypoint /usr/bin/test -v addons-006674:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:30:50.926673  295952 cli_runner.go:217] Completed: docker run --rm --name addons-006674-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006674 --entrypoint /usr/bin/test -v addons-006674:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.113817769s)
	I1018 09:30:50.926704  295952 oci.go:107] Successfully prepared a docker volume addons-006674
	I1018 09:30:50.926733  295952 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:30:50.926751  295952 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 09:30:50.926828  295952 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-006674:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 09:30:55.295484  295952 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-006674:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.368614862s)
	I1018 09:30:55.295531  295952 kic.go:203] duration metric: took 4.368760828s to extract preloaded images to volume ...
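	The preload step above is a plain tar-over-volume trick: a throwaway kicbase container mounts the lz4-compressed image tarball read-only plus the profile's named volume, and extracts into the volume so that the node container later starts with /var/lib already populated. A sketch of the same pattern using the paths from this log (the cache path varies with MINIKUBE_HOME, and the image digest pin is elided here):
	
	  PRELOAD=$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	  docker run --rm --entrypoint /usr/bin/tar \
	    -v "$PRELOAD":/preloaded.tar:ro \
	    -v addons-006674:/extractDir \
	    gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757 \
	    -I lz4 -xf /preloaded.tar -C /extractDir
	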
	W1018 09:30:55.295667  295952 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 09:30:55.295775  295952 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:30:55.348708  295952 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-006674 --name addons-006674 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006674 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-006674 --network addons-006674 --ip 192.168.49.2 --volume addons-006674:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:30:55.637823  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Running}}
	I1018 09:30:55.663276  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:30:55.687847  295952 cli_runner.go:164] Run: docker exec addons-006674 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:30:55.740533  295952 oci.go:144] the created container "addons-006674" has a running status.
	I1018 09:30:55.740566  295952 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa...
	I1018 09:30:55.986557  295952 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:30:56.015011  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:30:56.036269  295952 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:30:56.036294  295952 kic_runner.go:114] Args: [docker exec --privileged addons-006674 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:30:56.099946  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:30:56.124529  295952 machine.go:93] provisionDockerMachine start ...
	I1018 09:30:56.124639  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:30:56.151345  295952 main.go:141] libmachine: Using SSH client type: native
	I1018 09:30:56.151674  295952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1018 09:30:56.151683  295952 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:30:56.152375  295952 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 09:30:59.296950  295952 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-006674
	
	I1018 09:30:59.296976  295952 ubuntu.go:182] provisioning hostname "addons-006674"
	I1018 09:30:59.297049  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:30:59.314272  295952 main.go:141] libmachine: Using SSH client type: native
	I1018 09:30:59.314588  295952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1018 09:30:59.314607  295952 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-006674 && echo "addons-006674" | sudo tee /etc/hostname
	I1018 09:30:59.470620  295952 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-006674
	
	I1018 09:30:59.470780  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:30:59.488321  295952 main.go:141] libmachine: Using SSH client type: native
	I1018 09:30:59.488643  295952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1018 09:30:59.488660  295952 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-006674' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-006674/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-006674' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:30:59.637324  295952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:30:59.637350  295952 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 09:30:59.637384  295952 ubuntu.go:190] setting up certificates
	I1018 09:30:59.637395  295952 provision.go:84] configureAuth start
	I1018 09:30:59.637463  295952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006674
	I1018 09:30:59.654336  295952 provision.go:143] copyHostCerts
	I1018 09:30:59.654422  295952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 09:30:59.654565  295952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 09:30:59.654631  295952 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 09:30:59.654682  295952 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.addons-006674 san=[127.0.0.1 192.168.49.2 addons-006674 localhost minikube]
	I1018 09:30:59.992451  295952 provision.go:177] copyRemoteCerts
	I1018 09:30:59.992514  295952 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:30:59.992553  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:00.009445  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:00.189228  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:31:00.239758  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:31:00.299436  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 09:31:00.366058  295952 provision.go:87] duration metric: took 728.628117ms to configureAuth
	I1018 09:31:00.366155  295952 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:31:00.366402  295952 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:31:00.366570  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:00.397451  295952 main.go:141] libmachine: Using SSH client type: native
	I1018 09:31:00.397794  295952 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1018 09:31:00.397820  295952 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:31:00.678380  295952 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:31:00.678404  295952 machine.go:96] duration metric: took 4.553849608s to provisionDockerMachine
	I1018 09:31:00.678415  295952 client.go:171] duration metric: took 12.304796866s to LocalClient.Create
	I1018 09:31:00.678428  295952 start.go:167] duration metric: took 12.304867776s to libmachine.API.Create "addons-006674"
	I1018 09:31:00.678435  295952 start.go:293] postStartSetup for "addons-006674" (driver="docker")
	I1018 09:31:00.678444  295952 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:31:00.678521  295952 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:31:00.678570  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:00.699319  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:00.805061  295952 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:31:00.808568  295952 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:31:00.808596  295952 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:31:00.808607  295952 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 09:31:00.808671  295952 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 09:31:00.808698  295952 start.go:296] duration metric: took 130.257837ms for postStartSetup
	I1018 09:31:00.809020  295952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006674
	I1018 09:31:00.826125  295952 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/config.json ...
	I1018 09:31:00.826422  295952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:31:00.826483  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:00.843431  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:00.942079  295952 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:31:00.946676  295952 start.go:128] duration metric: took 12.57685091s to createHost
	I1018 09:31:00.946702  295952 start.go:83] releasing machines lock for "addons-006674", held for 12.576995875s
	I1018 09:31:00.946787  295952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006674
	I1018 09:31:00.963814  295952 ssh_runner.go:195] Run: cat /version.json
	I1018 09:31:00.963853  295952 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:31:00.963865  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:00.963916  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:00.980943  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:01.003233  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:01.195327  295952 ssh_runner.go:195] Run: systemctl --version
	I1018 09:31:01.202260  295952 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:31:01.239202  295952 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:31:01.243928  295952 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:31:01.244057  295952 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:31:01.276247  295952 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 09:31:01.276278  295952 start.go:495] detecting cgroup driver to use...
	I1018 09:31:01.276312  295952 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 09:31:01.276364  295952 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:31:01.294943  295952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:31:01.312247  295952 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:31:01.312315  295952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:31:01.331027  295952 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:31:01.348988  295952 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:31:01.473033  295952 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:31:01.600798  295952 docker.go:234] disabling docker service ...
	I1018 09:31:01.600894  295952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:31:01.624104  295952 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:31:01.638266  295952 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:31:01.752769  295952 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:31:01.870635  295952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:31:01.883325  295952 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:31:01.897787  295952 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:31:01.897864  295952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:01.907304  295952 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:31:01.907375  295952 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:01.917763  295952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:01.926619  295952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:01.935282  295952 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:31:01.943326  295952 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:01.951916  295952 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:01.965682  295952 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:01.974197  295952 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:31:01.982077  295952 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:31:01.989822  295952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:31:02.112934  295952 ssh_runner.go:195] Run: sudo systemctl restart crio
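	The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before the restart: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is switched to "cgroupfs" to match the cgroup driver detected on the host, conmon_cgroup is reset to "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A quick check that the edits landed, assuming a shell inside the node (e.g. minikube ssh -p addons-006674):
	
	  sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	  # expected:
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	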
	I1018 09:31:02.247871  295952 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:31:02.247974  295952 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:31:02.252329  295952 start.go:563] Will wait 60s for crictl version
	I1018 09:31:02.252413  295952 ssh_runner.go:195] Run: which crictl
	I1018 09:31:02.256322  295952 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:31:02.280806  295952 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:31:02.280937  295952 ssh_runner.go:195] Run: crio --version
	I1018 09:31:02.309686  295952 ssh_runner.go:195] Run: crio --version
	I1018 09:31:02.343113  295952 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:31:02.346104  295952 cli_runner.go:164] Run: docker network inspect addons-006674 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:31:02.362119  295952 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 09:31:02.366006  295952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:31:02.376160  295952 kubeadm.go:883] updating cluster {Name:addons-006674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:31:02.376277  295952 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:31:02.376332  295952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:31:02.412027  295952 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:31:02.412053  295952 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:31:02.412120  295952 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:31:02.440367  295952 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:31:02.440391  295952 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:31:02.440400  295952 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 09:31:02.440500  295952 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-006674 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-006674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
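	The bare ExecStart= in the unit text above is deliberate: for a non-oneshot systemd service, a second ExecStart assignment is an error unless the inherited value is cleared first, so drop-ins always empty the setting before redefining it. Once the drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp a few lines below), the merged unit can be inspected on the node:

	    sudo systemctl daemon-reload   # re-read unit files after writing the drop-in
	    systemctl cat kubelet          # prints the base unit plus 10-kubeadm.conf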
	I1018 09:31:02.440586  295952 ssh_runner.go:195] Run: crio config
	I1018 09:31:02.511866  295952 cni.go:84] Creating CNI manager for ""
	I1018 09:31:02.511888  295952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:31:02.511914  295952 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:31:02.511950  295952 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-006674 NodeName:addons-006674 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:31:02.512103  295952 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-006674"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
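	Before this file is handed to kubeadm for real (the init call comes at 09:31:05 below), the three documents above can be exercised with a dry run, which parses the config and prints the would-be actions without touching the node. This is a verification step the run itself skips; the versioned binary path is the one used later in this log:

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml --dry-run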
	
	I1018 09:31:02.512190  295952 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:31:02.520582  295952 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:31:02.520691  295952 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:31:02.528244  295952 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 09:31:02.541597  295952 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:31:02.555139  295952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1018 09:31:02.567720  295952 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:31:02.571391  295952 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:31:02.581593  295952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:31:02.698510  295952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:31:02.720259  295952 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674 for IP: 192.168.49.2
	I1018 09:31:02.720281  295952 certs.go:195] generating shared ca certs ...
	I1018 09:31:02.720306  295952 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:02.721119  295952 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 09:31:03.292157  295952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt ...
	I1018 09:31:03.292189  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt: {Name:mk8d3f19ca1aa391bbc70a2b3fb9803197d9d701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:03.293019  295952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key ...
	I1018 09:31:03.293037  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key: {Name:mk26ad599c66ddda508ce2717b1cda5e0b8014d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:03.293711  295952 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 09:31:03.559084  295952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt ...
	I1018 09:31:03.559116  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt: {Name:mka5245d0f3b42eba9e957f4c851d73149e14243 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:03.559308  295952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key ...
	I1018 09:31:03.559321  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key: {Name:mkf4ae093b4c402caa2df28ffb84d0806b324996 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:03.560065  295952 certs.go:257] generating profile certs ...
	I1018 09:31:03.560129  295952 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.key
	I1018 09:31:03.560153  295952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt with IP's: []
	I1018 09:31:03.761420  295952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt ...
	I1018 09:31:03.761458  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: {Name:mk530e18a5da22bc7097f1e016fc5cc1231fa098 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:03.761628  295952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.key ...
	I1018 09:31:03.761639  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.key: {Name:mk6ae3baf71e5567c2a52f974428d76f0b7e9b1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:03.762283  295952 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.key.48582f27
	I1018 09:31:03.762305  295952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.crt.48582f27 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 09:31:04.390391  295952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.crt.48582f27 ...
	I1018 09:31:04.390423  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.crt.48582f27: {Name:mk9db4316d0425914f78037d41b1d30d1a01500e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:04.390608  295952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.key.48582f27 ...
	I1018 09:31:04.390621  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.key.48582f27: {Name:mk606cfb848bcfa9c19ef33f24a655f24829857f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:04.391376  295952 certs.go:382] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.crt.48582f27 -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.crt
	I1018 09:31:04.391461  295952 certs.go:386] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.key.48582f27 -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.key
	I1018 09:31:04.391514  295952 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/proxy-client.key
	I1018 09:31:04.391534  295952 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/proxy-client.crt with IP's: []
	I1018 09:31:05.360879  295952 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/proxy-client.crt ...
	I1018 09:31:05.360914  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/proxy-client.crt: {Name:mk3fe22b2dd523989b85719c2e72c2db16a11387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:05.361772  295952 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/proxy-client.key ...
	I1018 09:31:05.361792  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/proxy-client.key: {Name:mke0d578aad2bbe61ee4a61be81d2337b12f9750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:05.362025  295952 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:31:05.362070  295952 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:31:05.362101  295952 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:31:05.362131  295952 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 09:31:05.362696  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:31:05.382243  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:31:05.401625  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:31:05.420346  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:31:05.438553  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 09:31:05.456807  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:31:05.474428  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:31:05.492244  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
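	The apiserver certificate generated above was requested with SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]; after the scp they can be read back from the node-side copy:

	    openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	      | grep -A1 'Subject Alternative Name'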
	I1018 09:31:05.510323  295952 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:31:05.528738  295952 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:31:05.541359  295952 ssh_runner.go:195] Run: openssl version
	I1018 09:31:05.547933  295952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:31:05.556706  295952 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:31:05.560549  295952 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:31:05.560615  295952 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:31:05.603241  295952 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
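	The b5213941.0 name is not arbitrary: it is the subject hash that the openssl x509 -hash call just printed, and a <hash>.0 symlink is what OpenSSL's CApath directory lookup expects, which is how the minikube CA becomes trusted system-wide:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
	    ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem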
	I1018 09:31:05.611577  295952 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:31:05.615443  295952 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:31:05.615538  295952 kubeadm.go:400] StartCluster: {Name:addons-006674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:31:05.615624  295952 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:31:05.615683  295952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:31:05.644637  295952 cri.go:89] found id: ""
	I1018 09:31:05.644711  295952 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:31:05.652400  295952 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:31:05.660222  295952 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:31:05.660292  295952 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:31:05.668012  295952 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:31:05.668036  295952 kubeadm.go:157] found existing configuration files:
	
	I1018 09:31:05.668152  295952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:31:05.676179  295952 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:31:05.676244  295952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:31:05.684157  295952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:31:05.692759  295952 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:31:05.692878  295952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:31:05.700936  295952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:31:05.709535  295952 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:31:05.709688  295952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:31:05.717469  295952 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:31:05.726581  295952 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:31:05.726703  295952 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
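	The four grep-then-rm exchanges above are one check unrolled: any kubeconfig that does not point at https://control-plane.minikube.internal:8443 is treated as stale and removed before init. Collapsed into a loop, the same pass looks like this; note grep's nonzero exit on a missing file also triggers the rm, which is harmless:

	    ep='https://control-plane.minikube.internal:8443'
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	    done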
	I1018 09:31:05.734808  295952 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:31:05.776652  295952 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:31:05.776950  295952 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:31:05.800550  295952 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:31:05.800705  295952 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 09:31:05.800784  295952 kubeadm.go:318] OS: Linux
	I1018 09:31:05.800882  295952 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:31:05.800967  295952 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 09:31:05.801067  295952 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:31:05.801152  295952 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:31:05.801272  295952 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:31:05.801375  295952 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:31:05.801452  295952 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:31:05.801533  295952 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:31:05.801614  295952 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 09:31:05.867193  295952 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:31:05.867376  295952 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:31:05.867518  295952 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:31:05.875548  295952 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:31:05.881568  295952 out.go:252]   - Generating certificates and keys ...
	I1018 09:31:05.881731  295952 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:31:05.881848  295952 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:31:06.305660  295952 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:31:06.696223  295952 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:31:06.899310  295952 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:31:07.937952  295952 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:31:08.872734  295952 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:31:08.873014  295952 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-006674 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 09:31:10.974917  295952 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:31:10.975195  295952 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-006674 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 09:31:11.233222  295952 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:31:11.942054  295952 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:31:12.394529  295952 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:31:12.394813  295952 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:31:12.898310  295952 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:31:12.940250  295952 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:31:13.191231  295952 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:31:13.478734  295952 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:31:14.082514  295952 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:31:14.083166  295952 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:31:14.086016  295952 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:31:14.089461  295952 out.go:252]   - Booting up control plane ...
	I1018 09:31:14.089566  295952 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:31:14.089650  295952 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:31:14.090483  295952 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:31:14.106342  295952 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:31:14.106694  295952 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:31:14.114589  295952 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:31:14.114946  295952 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:31:14.115205  295952 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:31:14.245204  295952 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:31:14.245331  295952 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:31:15.746758  295952 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501678606s
	I1018 09:31:15.750557  295952 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:31:15.750787  295952 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 09:31:15.751025  295952 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:31:15.751248  295952 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:31:18.348726  295952 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.596955838s
	I1018 09:31:20.118166  295952 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.366631875s
	I1018 09:31:22.253935  295952 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502844159s
	I1018 09:31:22.273890  295952 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:31:22.289979  295952 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:31:22.304645  295952 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:31:22.304884  295952 kubeadm.go:318] [mark-control-plane] Marking the node addons-006674 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:31:22.321327  295952 kubeadm.go:318] [bootstrap-token] Using token: j44vbg.trsy7q1sq403c6an
	I1018 09:31:22.324840  295952 out.go:252]   - Configuring RBAC rules ...
	I1018 09:31:22.324993  295952 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:31:22.330485  295952 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:31:22.340931  295952 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:31:22.347168  295952 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:31:22.351151  295952 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:31:22.357698  295952 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:31:22.662352  295952 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:31:23.103211  295952 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:31:23.661284  295952 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:31:23.662397  295952 kubeadm.go:318] 
	I1018 09:31:23.662471  295952 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:31:23.662481  295952 kubeadm.go:318] 
	I1018 09:31:23.662563  295952 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:31:23.662571  295952 kubeadm.go:318] 
	I1018 09:31:23.662598  295952 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:31:23.662682  295952 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:31:23.662739  295952 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:31:23.662747  295952 kubeadm.go:318] 
	I1018 09:31:23.662804  295952 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:31:23.662811  295952 kubeadm.go:318] 
	I1018 09:31:23.662861  295952 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:31:23.662868  295952 kubeadm.go:318] 
	I1018 09:31:23.662922  295952 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:31:23.663003  295952 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:31:23.663079  295952 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:31:23.663088  295952 kubeadm.go:318] 
	I1018 09:31:23.663175  295952 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:31:23.663258  295952 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:31:23.663266  295952 kubeadm.go:318] 
	I1018 09:31:23.663353  295952 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token j44vbg.trsy7q1sq403c6an \
	I1018 09:31:23.663465  295952 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 \
	I1018 09:31:23.663710  295952 kubeadm.go:318] 	--control-plane 
	I1018 09:31:23.663723  295952 kubeadm.go:318] 
	I1018 09:31:23.663814  295952 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:31:23.663819  295952 kubeadm.go:318] 
	I1018 09:31:23.663915  295952 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token j44vbg.trsy7q1sq403c6an \
	I1018 09:31:23.664024  295952 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 
	I1018 09:31:23.667404  295952 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 09:31:23.667645  295952 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 09:31:23.667761  295952 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
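	The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the cluster CA's DER-encoded public key. It can be recomputed from the CA copied to the node earlier in this run, which is how a joining node would verify it out of band:

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl pkey -pubin -outform der \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'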
	I1018 09:31:23.667781  295952 cni.go:84] Creating CNI manager for ""
	I1018 09:31:23.667789  295952 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:31:23.671006  295952 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 09:31:23.673881  295952 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:31:23.678086  295952 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:31:23.678107  295952 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:31:23.691303  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
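	Once the manifest is applied, kindnet runs as a DaemonSet in kube-system. A quick rollout check with the same kubectl and kubeconfig pair used above; the app=kindnet label is assumed from the upstream kindnet manifest:

	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get daemonset,pods -l app=kindnet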
	I1018 09:31:23.995467  295952 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:31:23.995596  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:23.995706  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-006674 minikube.k8s.io/updated_at=2025_10_18T09_31_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=addons-006674 minikube.k8s.io/primary=true
	I1018 09:31:24.188796  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:24.188878  295952 ops.go:34] apiserver oom_adj: -16
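	oom_adj is the legacy /proc interface with a -16..15 range; the kernel scales it from oom_score_adj (-1000..1000), so the -16 read above is consistent with the -997 the kubelet assigns to Guaranteed-QoS static pods. The modern file gives the unscaled value:

	    cat /proc/$(pgrep kube-apiserver)/oom_score_adj   # typically -997 for control-plane pods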
	I1018 09:31:24.689559  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:25.189776  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:25.689826  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:26.188957  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:26.689542  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:27.189235  295952 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:27.270689  295952 kubeadm.go:1113] duration metric: took 3.275137685s to wait for elevateKubeSystemPrivileges
	I1018 09:31:27.270723  295952 kubeadm.go:402] duration metric: took 21.655190172s to StartCluster
	I1018 09:31:27.270743  295952 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:27.270860  295952 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 09:31:27.271244  295952 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:27.272096  295952 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:31:27.272239  295952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:31:27.272497  295952 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:31:27.272540  295952 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
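	The toEnable map above is the same enable/disable matrix that the addons subcommand should render for this profile, using the binary exercised throughout this report:

	    out/minikube-linux-arm64 -p addons-006674 addons list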
	I1018 09:31:27.272640  295952 addons.go:69] Setting yakd=true in profile "addons-006674"
	I1018 09:31:27.272673  295952 addons.go:238] Setting addon yakd=true in "addons-006674"
	I1018 09:31:27.272698  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.273229  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.273582  295952 addons.go:69] Setting inspektor-gadget=true in profile "addons-006674"
	I1018 09:31:27.273600  295952 addons.go:238] Setting addon inspektor-gadget=true in "addons-006674"
	I1018 09:31:27.273625  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.273773  295952 addons.go:69] Setting metrics-server=true in profile "addons-006674"
	I1018 09:31:27.273811  295952 addons.go:238] Setting addon metrics-server=true in "addons-006674"
	I1018 09:31:27.273866  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.274027  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.274427  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.276778  295952 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-006674"
	I1018 09:31:27.277377  295952 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-006674"
	I1018 09:31:27.277466  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.276958  295952 addons.go:69] Setting registry=true in profile "addons-006674"
	I1018 09:31:27.278143  295952 addons.go:238] Setting addon registry=true in "addons-006674"
	I1018 09:31:27.278176  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.278602  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.276973  295952 addons.go:69] Setting registry-creds=true in profile "addons-006674"
	I1018 09:31:27.280161  295952 addons.go:238] Setting addon registry-creds=true in "addons-006674"
	I1018 09:31:27.280226  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.280744  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.281487  295952 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-006674"
	I1018 09:31:27.281526  295952 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-006674"
	I1018 09:31:27.281558  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.282098  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.276979  295952 addons.go:69] Setting storage-provisioner=true in profile "addons-006674"
	I1018 09:31:27.290431  295952 addons.go:238] Setting addon storage-provisioner=true in "addons-006674"
	I1018 09:31:27.290479  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.276985  295952 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-006674"
	I1018 09:31:27.276991  295952 addons.go:69] Setting volcano=true in profile "addons-006674"
	I1018 09:31:27.290787  295952 addons.go:238] Setting addon volcano=true in "addons-006674"
	I1018 09:31:27.276997  295952 addons.go:69] Setting volumesnapshots=true in profile "addons-006674"
	I1018 09:31:27.290868  295952 addons.go:238] Setting addon volumesnapshots=true in "addons-006674"
	I1018 09:31:27.290885  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.277076  295952 out.go:179] * Verifying Kubernetes components...
	I1018 09:31:27.297502  295952 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:31:27.291504  295952 addons.go:69] Setting cloud-spanner=true in profile "addons-006674"
	I1018 09:31:27.297675  295952 addons.go:238] Setting addon cloud-spanner=true in "addons-006674"
	I1018 09:31:27.297789  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.298254  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.291516  295952 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-006674"
	I1018 09:31:27.305944  295952 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-006674"
	I1018 09:31:27.305980  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.306444  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.291523  295952 addons.go:69] Setting default-storageclass=true in profile "addons-006674"
	I1018 09:31:27.326092  295952 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-006674"
	I1018 09:31:27.326526  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.291530  295952 addons.go:69] Setting gcp-auth=true in profile "addons-006674"
	I1018 09:31:27.338541  295952 mustload.go:65] Loading cluster: addons-006674
	I1018 09:31:27.338791  295952 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:31:27.339103  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.291535  295952 addons.go:69] Setting ingress=true in profile "addons-006674"
	I1018 09:31:27.351428  295952 addons.go:238] Setting addon ingress=true in "addons-006674"
	I1018 09:31:27.351504  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.352023  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.291541  295952 addons.go:69] Setting ingress-dns=true in profile "addons-006674"
	I1018 09:31:27.371542  295952 addons.go:238] Setting addon ingress-dns=true in "addons-006674"
	I1018 09:31:27.371651  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.372141  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.292399  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.398657  295952 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 09:31:27.401484  295952 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 09:31:27.401511  295952 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 09:31:27.290721  295952 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-006674"
	I1018 09:31:27.401586  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.401864  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.292409  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.407074  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.292859  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.427670  295952 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 09:31:27.371486  295952 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 09:31:27.291810  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.469263  295952 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 09:31:27.469464  295952 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 09:31:27.496724  295952 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 09:31:27.500124  295952 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 09:31:27.505264  295952 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 09:31:27.505296  295952 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 09:31:27.505365  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.505560  295952 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 09:31:27.505573  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 09:31:27.505617  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.512359  295952 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 09:31:27.512385  295952 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 09:31:27.512458  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.517489  295952 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 09:31:27.517564  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 09:31:27.517667  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.550500  295952 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 09:31:27.550525  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 09:31:27.550598  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.575026  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 09:31:27.596326  295952 addons.go:238] Setting addon default-storageclass=true in "addons-006674"
	I1018 09:31:27.596376  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.596809  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.597863  295952 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 09:31:27.600858  295952 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 09:31:27.600881  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 09:31:27.600948  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.645268  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 09:31:27.648222  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 09:31:27.651063  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 09:31:27.653992  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 09:31:27.654212  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.664115  295952 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 09:31:27.667901  295952 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 09:31:27.667932  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 09:31:27.668034  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.672733  295952 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:31:27.679776  295952 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-006674"
	I1018 09:31:27.679822  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:27.680241  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:27.657330  295952 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
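	The long pipeline above is how minikube patches CoreDNS without a template: dump the coredns ConfigMap, splice a hosts block in front of the forward plugin (plus a log directive after errors) with sed, and kubectl replace the result. Reduced to its core with a plain kubectl context and GNU sed syntax, as in the log:

	    kubectl -n kube-system get configmap coredns -o yaml \
	      | sed '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
	      | kubectl replace -f -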
	I1018 09:31:27.682163  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.690523  295952 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 09:31:27.690710  295952 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:31:27.705407  295952 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 09:31:27.708855  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 09:31:27.711794  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 09:31:27.712839  295952 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 09:31:27.719226  295952 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 09:31:27.719251  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 09:31:27.719321  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.734687  295952 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:31:27.734707  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:31:27.734778  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.737082  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 09:31:27.738524  295952 node_ready.go:35] waiting up to 6m0s for node "addons-006674" to be "Ready" ...
	I1018 09:31:27.754926  295952 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 09:31:27.754949  295952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 09:31:27.755038  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.774321  295952 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 09:31:27.777782  295952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 09:31:27.777812  295952 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 09:31:27.777890  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	W1018 09:31:27.790445  295952 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 09:31:27.806393  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.836119  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.838326  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.839132  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.840680  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.841915  295952 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 09:31:27.845077  295952 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 09:31:27.845135  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 09:31:27.845266  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.863739  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.864604  295952 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:31:27.864625  295952 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:31:27.864680  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.909888  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.921951  295952 out.go:179]   - Using image docker.io/busybox:stable
	I1018 09:31:27.924765  295952 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 09:31:27.929309  295952 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 09:31:27.929333  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 09:31:27.929411  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:27.949238  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.954956  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.955960  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.966824  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:27.996933  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:28.004154  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	W1018 09:31:28.009541  295952 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 09:31:28.009572  295952 retry.go:31] will retry after 363.527676ms: ssh: handshake failed: EOF
	I1018 09:31:28.014604  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	W1018 09:31:28.016085  295952 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 09:31:28.016108  295952 retry.go:31] will retry after 239.787661ms: ssh: handshake failed: EOF
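The two handshake EOFs above are expected this early: many SSH sessions are being opened in parallel while the guest's sshd is still warming up, so sshutil fails the dial and retry.go schedules another attempt a few hundred milliseconds later. A minimal sketch of that keep-retrying behaviour (loop bound and delay are illustrative; host, port, user and key path are the ones from this log):

    $ for i in $(seq 1 10); do
        ssh -o StrictHostKeyChecking=no -o ConnectTimeout=2 \
            -i /home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa \
            -p 33138 docker@127.0.0.1 true 2>/dev/null && break
        sleep 0.4   # comparable to the 239-364ms delays retry.go logs above
      done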
	I1018 09:31:28.146671  295952 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:28.146743  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 09:31:28.372498  295952 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 09:31:28.372570  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 09:31:28.443884  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:28.539511  295952 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 09:31:28.539590  295952 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 09:31:28.607598  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 09:31:28.668563  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 09:31:28.703849  295952 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 09:31:28.703924  295952 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 09:31:28.714046  295952 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 09:31:28.714122  295952 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 09:31:28.728387  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 09:31:28.766733  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 09:31:28.826691  295952 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 09:31:28.826713  295952 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 09:31:28.847833  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 09:31:28.853888  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 09:31:28.880435  295952 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 09:31:28.880505  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 09:31:28.903209  295952 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 09:31:28.903282  295952 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 09:31:28.920130  295952 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 09:31:28.920219  295952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 09:31:28.942435  295952 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 09:31:28.942513  295952 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 09:31:28.976272  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:31:29.023455  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 09:31:29.095335  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 09:31:29.096374  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:31:29.099198  295952 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 09:31:29.099219  295952 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 09:31:29.102514  295952 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 09:31:29.102590  295952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 09:31:29.144760  295952 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 09:31:29.144835  295952 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 09:31:29.225404  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 09:31:29.304179  295952 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 09:31:29.304203  295952 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 09:31:29.306432  295952 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 09:31:29.306452  295952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 09:31:29.352677  295952 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.670959873s)
	I1018 09:31:29.352709  295952 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
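The replace pipeline that just completed rewrites CoreDNS's Corefile in place: sed inserts a hosts{} stanza ahead of the forward plugin (and a log directive ahead of errors) so that host.minikube.internal resolves to the gateway address, 192.168.49.1 in this run. The edit can be confirmed with:

    $ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # should now contain, per the sed expression above:
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }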
	I1018 09:31:29.413137  295952 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 09:31:29.413171  295952 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 09:31:29.522628  295952 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 09:31:29.522653  295952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 09:31:29.606431  295952 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 09:31:29.606457  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 09:31:29.606744  295952 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 09:31:29.606760  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 09:31:29.697258  295952 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 09:31:29.697284  295952 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	W1018 09:31:29.742189  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
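node_ready.go is polling the node's Ready condition, which stays False until the kubelet and network plumbing report healthy; the 6m0s budget comes from the wait started at the top of this block. The same check by hand:

    $ kubectl get node addons-006674 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    False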
	I1018 09:31:29.857291  295952 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-006674" context rescaled to 1 replicas
	I1018 09:31:29.883535  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 09:31:29.899837  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 09:31:29.929393  295952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 09:31:29.929417  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 09:31:30.345940  295952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 09:31:30.345965  295952 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 09:31:30.600651  295952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 09:31:30.600676  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 09:31:30.892963  295952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 09:31:30.892991  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 09:31:31.181670  295952 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 09:31:31.181749  295952 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 09:31:31.471779  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1018 09:31:31.758503  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:32.842165  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.3981963s)
	W1018 09:31:32.842196  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:32.842214  295952 retry.go:31] will retry after 343.817823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
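This failure is in the addon payload itself, not the apply machinery: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because the document carries no top-level apiVersion or kind, while the companion ig-deployment.yaml applies cleanly. The --force retries below change how conflicting objects get replaced, not how files are validated, so every subsequent attempt fails identically. For reference, a syntactically valid CRD sets both fields; the group and names here are illustrative stand-ins, not the gadget CRD's real schema:

    $ kubectl apply --dry-run=client -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: traces.gadget.example.io
    spec:
      group: gadget.example.io
      names: {kind: Trace, listKind: TraceList, plural: traces, singular: trace}
      scope: Namespaced
      versions:
      - name: v1alpha1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
    EOF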
	I1018 09:31:32.842264  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.234590933s)
	I1018 09:31:32.842310  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.173677536s)
	I1018 09:31:32.842351  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.113897096s)
	I1018 09:31:32.842541  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.075736885s)
	I1018 09:31:33.186770  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 09:31:33.789097  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:33.883400  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.029434409s)
	I1018 09:31:33.883477  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.907133777s)
	I1018 09:31:33.883505  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.035604487s)
	I1018 09:31:33.883515  295952 addons.go:479] Verifying addon ingress=true in "addons-006674"
	I1018 09:31:33.883623  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.787229814s)
	I1018 09:31:33.883827  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.658349187s)
	I1018 09:31:33.883578  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.860029536s)
	I1018 09:31:33.884107  295952 addons.go:479] Verifying addon metrics-server=true in "addons-006674"
	I1018 09:31:33.883605  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.788192757s)
	I1018 09:31:33.884124  295952 addons.go:479] Verifying addon registry=true in "addons-006674"
	I1018 09:31:33.886837  295952 out.go:179] * Verifying registry addon...
	I1018 09:31:33.886944  295952 out.go:179] * Verifying ingress addon...
	I1018 09:31:33.891383  295952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 09:31:33.892245  295952 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 09:31:33.922498  295952 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 09:31:33.922523  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:33.930455  295952 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 09:31:33.930481  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
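kapi.go:96 now polls the freshly created pods by label until each reports Ready, logging the current phase on every pass; the long runs of Pending lines below are that loop ticking. The kubectl equivalent of one of these waits (label and namespace from this log, timeout illustrative):

    $ kubectl -n kube-system wait --for=condition=Ready pod \
        -l kubernetes.io/minikube-addons=registry --timeout=6m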
	I1018 09:31:34.007203  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.123624081s)
	W1018 09:31:34.007243  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 09:31:34.007264  295952 retry.go:31] will retry after 228.333502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
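Unlike the ig-crd case, this failure is a pure ordering race: the VolumeSnapshotClass object is submitted in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet published a REST mapping for the new kind. The forced re-apply queued below succeeds once the CRDs are established. One way to sequence it explicitly (file names from this log; the wait step is the illustrative part):

    $ kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
                    -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
                    -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    $ kubectl wait --for=condition=Established \
        crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    $ kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml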
	I1018 09:31:34.007311  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.10744731s)
	I1018 09:31:34.011298  295952 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-006674 service yakd-dashboard -n yakd-dashboard
	
	I1018 09:31:34.236083  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 09:31:34.402723  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:34.403011  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:34.603454  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.131585359s)
	I1018 09:31:34.603488  295952 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-006674"
	I1018 09:31:34.606654  295952 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 09:31:34.611129  295952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 09:31:34.620750  295952 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 09:31:34.620772  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:34.715935  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.529068326s)
	W1018 09:31:34.715969  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:34.715988  295952 retry.go:31] will retry after 345.302342ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:34.896602  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:34.896758  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:35.061968  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:35.115712  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:35.297841  295952 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 09:31:35.297970  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:35.320364  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:31:35.397781  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:35.397845  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:35.438319  295952 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 09:31:35.454598  295952 addons.go:238] Setting addon gcp-auth=true in "addons-006674"
	I1018 09:31:35.454652  295952 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:31:35.455110  295952 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:31:35.474048  295952 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 09:31:35.474107  295952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:31:35.494138  295952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
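Having found application credentials on the host, minikube copies them into the guest (the 162-byte scp above), flips gcp-auth on, and double-checks the file by cat-ing it over a fresh SSH session. The manual form of that check, using the connection details from this log:

    $ ssh -p 33138 \
        -i /home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa \
        docker@127.0.0.1 'cat /var/lib/minikube/google_application_credentials.json'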
	I1018 09:31:35.615032  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:35.896142  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:35.897019  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:36.114718  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 09:31:36.241283  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:36.395544  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:36.395626  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:36.614730  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:36.897397  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:36.897868  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:37.115136  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:37.147839  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.911678007s)
	I1018 09:31:37.147972  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.085972877s)
	W1018 09:31:37.148010  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:37.148028  295952 retry.go:31] will retry after 743.995265ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:37.148029  295952 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.673953142s)
	I1018 09:31:37.151275  295952 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 09:31:37.154126  295952 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 09:31:37.156860  295952 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 09:31:37.156878  295952 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 09:31:37.170223  295952 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 09:31:37.170295  295952 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 09:31:37.184638  295952 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 09:31:37.184660  295952 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 09:31:37.198046  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 09:31:37.396161  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:37.396556  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:37.618799  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:37.677495  295952 addons.go:479] Verifying addon gcp-auth=true in "addons-006674"
	I1018 09:31:37.682086  295952 out.go:179] * Verifying gcp-auth addon...
	I1018 09:31:37.685769  295952 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 09:31:37.723523  295952 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 09:31:37.723548  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:37.892319  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:37.896692  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:37.897338  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:38.114988  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:38.189435  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:38.241559  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:38.397247  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:38.397762  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:38.614945  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:38.689702  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:38.690284  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:38.690311  295952 retry.go:31] will retry after 651.86289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:38.894902  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:38.895597  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:39.115351  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:39.189396  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:39.342778  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:39.396615  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:39.396847  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:39.615325  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:39.689562  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:39.895985  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:39.896881  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:40.114878  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 09:31:40.166225  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:40.166259  295952 retry.go:31] will retry after 1.693297533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:40.189384  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:40.242316  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:40.395161  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:40.395996  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:40.615369  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:40.716106  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:40.894848  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:40.895077  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:41.114566  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:41.189142  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:41.395218  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:41.395371  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:41.614424  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:41.689162  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:41.860241  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:41.896798  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:41.897456  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:42.115582  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:42.190689  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:42.242808  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:42.396279  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:42.397580  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:42.614296  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 09:31:42.673566  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:42.673598  295952 retry.go:31] will retry after 1.299020929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:42.690011  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:42.895161  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:42.895528  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:43.114823  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:43.189179  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:43.395252  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:43.395540  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:43.614921  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:43.689332  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:43.894841  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:43.895312  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:43.973794  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:44.115706  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:44.188881  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:44.396133  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:44.396640  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:44.614862  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:44.689149  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:44.742586  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	W1018 09:31:44.765377  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:44.765447  295952 retry.go:31] will retry after 2.199782569s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:44.894563  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:44.895900  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:45.128597  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:45.194270  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:45.395852  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:45.396326  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:45.614700  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:45.689594  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:45.894577  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:45.895168  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:46.114615  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:46.189496  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:46.394184  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:46.395355  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:46.614078  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:46.688969  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:46.895254  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:46.895411  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:46.965725  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:47.115641  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:47.189603  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:47.241563  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:47.396879  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:47.397255  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:47.615528  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:47.689311  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:47.764300  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:47.764418  295952 retry.go:31] will retry after 2.641135294s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 09:31:47.894342  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:47.895452  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:48.114998  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:48.188885  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:48.394319  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:48.394717  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:48.615207  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:48.689655  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:48.895746  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:48.895837  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:49.115008  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:49.189281  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:49.242290  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:49.395787  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:49.395604  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:49.614725  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:49.689535  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:49.897073  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:49.897160  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:50.114193  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:50.189562  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:50.394073  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:50.395121  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:50.406489  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:50.615330  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:50.689483  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:50.894941  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:50.896310  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:51.118505  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:51.189866  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:51.217326  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:51.217377  295952 retry.go:31] will retry after 4.387535304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 09:31:51.395275  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:51.395827  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:51.614679  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:51.689581  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:51.742234  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:51.894687  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:51.895366  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:52.114399  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:52.189127  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:52.394991  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:52.395608  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:52.614589  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:52.689531  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:52.895376  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:52.895512  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:53.114810  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:53.190125  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:53.394675  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:53.395723  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:53.615171  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:53.689321  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:53.742287  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:53.895654  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:53.895990  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:54.115022  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:54.195461  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:54.395321  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:54.396127  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:54.614290  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:54.689336  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:54.895545  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:54.895769  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:55.115046  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:55.189260  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:55.395007  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:55.395127  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:55.605165  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:31:55.615815  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:55.688750  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:55.897249  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:55.897585  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:56.114670  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:56.189401  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:56.242490  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:56.395985  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:56.396369  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 09:31:56.426252  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:31:56.426297  295952 retry.go:31] will retry after 12.838707612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
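
The waits logged by retry.go above grow roughly exponentially with jitter (2.2s, 2.6s, 4.4s, 12.8s). A hedged sketch of that backoff pattern; this is not minikube's actual retry package, just the shape of the behaviour visible in the log:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff keeps calling apply, sleeping a jittered, roughly
    // doubling interval between failures, like the retry.go waits above.
    func retryWithBackoff(apply func() error, attempts int, base time.Duration) error {
        wait := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            sleep := wait + time.Duration(rand.Int63n(int64(wait))) // jitter in [wait, 2*wait)
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            wait *= 2
        }
        return err
    }

    func main() {
        n := 0
        _ = retryWithBackoff(func() error {
            n++
            if n < 4 {
                return fmt.Errorf("process exited with status 1")
            }
            return nil
        }, 6, 2*time.Second)
    }
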
	I1018 09:31:56.614263  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:56.689248  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:56.895135  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:56.895616  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:57.128449  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:57.189345  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:57.395494  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:57.395640  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:57.614720  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:57.688641  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:57.894937  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:57.895261  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:58.114173  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:58.188923  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:58.394652  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:58.395771  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:58.615121  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:58.689008  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:31:58.741867  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:31:58.895420  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:58.895792  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:59.115743  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:59.188706  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:59.394672  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:59.394759  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:31:59.614700  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:31:59.688575  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:31:59.894519  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:31:59.895205  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:00.121905  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:00.192788  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:00.398432  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:00.398489  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:00.615124  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:00.688883  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:32:00.742002  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:32:00.895290  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:00.895782  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:01.115541  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:01.189563  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:01.394235  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:01.395300  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:01.614400  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:01.689502  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:01.896164  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:01.895952  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:02.114541  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:02.189741  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:02.395707  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:02.396083  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:02.615374  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:02.689099  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:32:02.742193  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:32:02.895190  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:02.895491  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:03.114891  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:03.189593  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:03.394157  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:03.395638  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:03.615085  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:03.689141  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:03.895281  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:03.895656  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:04.115123  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:04.191094  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:04.395544  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:04.395692  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:04.614918  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:04.688631  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:04.894825  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:04.895703  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:05.114809  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:05.189005  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:32:05.242338  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:32:05.395728  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:05.395783  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:05.614964  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:05.688755  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:05.894548  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:05.895670  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:06.114769  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:06.189598  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:06.395608  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:06.395795  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:06.615183  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:06.688901  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:06.894376  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:06.895812  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:07.116134  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:07.188951  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:07.394567  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:07.395257  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:07.614316  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:07.689171  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:32:07.741835  295952 node_ready.go:57] node "addons-006674" has "Ready":"False" status (will retry)
	I1018 09:32:07.895072  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:07.895584  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:08.114639  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:08.189222  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:08.395092  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:08.395411  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:08.614479  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:08.689358  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:08.895316  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:08.895669  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:09.114800  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:09.189675  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:09.265593  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:32:09.493204  295952 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 09:32:09.493277  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:09.493468  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:09.621756  295952 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 09:32:09.621822  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:09.709455  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:09.824272  295952 node_ready.go:49] node "addons-006674" is "Ready"
	I1018 09:32:09.824350  295952 node_ready.go:38] duration metric: took 42.0858024s for node "addons-006674" to be "Ready" ...
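
The node_ready.go wait above took ~42s of polling before the node's Ready condition flipped to True. A sketch of the same wait using client-go, assuming the kubeconfig path and node name shown in the log (the polling interval and timeout are illustrative):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-006674", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println(`node "addons-006674" is "Ready"`)
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second) // the log polls on a ~2s cadence
        }
        fmt.Println("timed out waiting for node to be Ready")
    }
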
	I1018 09:32:09.824387  295952 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:32:09.824475  295952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:32:09.911694  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:09.911821  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:10.124358  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:10.203061  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:10.397066  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:10.397584  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:10.625714  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:10.729772  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:10.897350  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:10.897908  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:11.002107  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.736473849s)
	W1018 09:32:11.002188  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:32:11.002261  295952 retry.go:31] will retry after 11.564156757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 09:32:11.002306  295952 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.177730308s)
	I1018 09:32:11.002343  295952 api_server.go:72] duration metric: took 43.73020793s to wait for apiserver process to appear ...
	I1018 09:32:11.002364  295952 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:32:11.002409  295952 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 09:32:11.011863  295952 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 09:32:11.013565  295952 api_server.go:141] control plane version: v1.34.1
	I1018 09:32:11.013639  295952 api_server.go:131] duration metric: took 11.248909ms to wait for apiserver health ...
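
The healthz probe here is a plain HTTPS GET against the apiserver followed by a version lookup. A rough equivalent of the GET, with TLS verification skipped only to keep the sketch short; the real client authenticates with the cluster's certificates:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            // Shortcut for the sketch only; minikube verifies the apiserver
            // against the cluster CA instead of skipping TLS checks.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("https://192.168.49.2:8443/healthz returned %d:\n%s\n", resp.StatusCode, body)
    }
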
	I1018 09:32:11.013663  295952 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:32:11.032455  295952 system_pods.go:59] 19 kube-system pods found
	I1018 09:32:11.032547  295952 system_pods.go:61] "coredns-66bc5c9577-kj5jb" [97c49f44-8c6f-4c14-a90f-31dfda93a372] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:32:11.032575  295952 system_pods.go:61] "csi-hostpath-attacher-0" [d324748f-0916-4247-b8da-42b067ee5ff2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 09:32:11.032599  295952 system_pods.go:61] "csi-hostpath-resizer-0" [05ce0eee-4f78-44d5-b868-c8c9f13f276b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 09:32:11.032632  295952 system_pods.go:61] "csi-hostpathplugin-rswxb" [feddf8af-e1d0-4f04-a39a-eedf70a898ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 09:32:11.032653  295952 system_pods.go:61] "etcd-addons-006674" [c14c573d-1612-42d5-87f8-4bd642899fae] Running
	I1018 09:32:11.032675  295952 system_pods.go:61] "kindnet-h49vl" [6a1383f0-1850-4dff-991e-4fa71596bb58] Running
	I1018 09:32:11.032695  295952 system_pods.go:61] "kube-apiserver-addons-006674" [e5eff75b-55ea-4fbb-a83d-cc8550c66472] Running
	I1018 09:32:11.032721  295952 system_pods.go:61] "kube-controller-manager-addons-006674" [6e9cccba-4dab-456b-b750-0ac8893a4371] Running
	I1018 09:32:11.032755  295952 system_pods.go:61] "kube-ingress-dns-minikube" [5d722dca-00c8-447d-a43c-acdb7b5482e3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 09:32:11.032775  295952 system_pods.go:61] "kube-proxy-k5bfv" [ecc37a01-e2c2-4a4c-a272-ac37b4cd96f3] Running
	I1018 09:32:11.032796  295952 system_pods.go:61] "kube-scheduler-addons-006674" [ac4de840-6b5c-491a-9cbc-2a080dbc17bf] Running
	I1018 09:32:11.032817  295952 system_pods.go:61] "metrics-server-85b7d694d7-szvm5" [7f1c3285-5e41-444e-a773-3a86f80ec0c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 09:32:11.032840  295952 system_pods.go:61] "nvidia-device-plugin-daemonset-j658f" [a582f724-b46c-4377-b626-fcf59ae12980] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 09:32:11.032862  295952 system_pods.go:61] "registry-6b586f9694-flkkz" [cd38f302-4660-4066-897a-e2246722c55f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 09:32:11.032895  295952 system_pods.go:61] "registry-creds-764b6fb674-tjsdw" [23cd49e2-ec97-44a9-9bd9-370ba2b403c4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 09:32:11.032917  295952 system_pods.go:61] "registry-proxy-46rp2" [99a95a84-cd1c-42d2-b8a8-a0bb70a90f31] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 09:32:11.032939  295952 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9pgqt" [afec06f4-3213-44ef-a323-a658ff117c82] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 09:32:11.032963  295952 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rfbdb" [05444b46-9758-438d-92c4-5bf39c7165b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 09:32:11.032996  295952 system_pods.go:61] "storage-provisioner" [b7652d26-3d30-439f-a886-8dc4a69c9f1e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:32:11.033018  295952 system_pods.go:74] duration metric: took 19.330228ms to wait for pod list to return data ...
	I1018 09:32:11.033040  295952 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:32:11.039295  295952 default_sa.go:45] found service account: "default"
	I1018 09:32:11.039379  295952 default_sa.go:55] duration metric: took 6.316341ms for default service account to be created ...
	I1018 09:32:11.039404  295952 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:32:11.045254  295952 system_pods.go:86] 19 kube-system pods found
	I1018 09:32:11.045351  295952 system_pods.go:89] "coredns-66bc5c9577-kj5jb" [97c49f44-8c6f-4c14-a90f-31dfda93a372] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:32:11.045383  295952 system_pods.go:89] "csi-hostpath-attacher-0" [d324748f-0916-4247-b8da-42b067ee5ff2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 09:32:11.045407  295952 system_pods.go:89] "csi-hostpath-resizer-0" [05ce0eee-4f78-44d5-b868-c8c9f13f276b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 09:32:11.045445  295952 system_pods.go:89] "csi-hostpathplugin-rswxb" [feddf8af-e1d0-4f04-a39a-eedf70a898ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 09:32:11.045474  295952 system_pods.go:89] "etcd-addons-006674" [c14c573d-1612-42d5-87f8-4bd642899fae] Running
	I1018 09:32:11.045495  295952 system_pods.go:89] "kindnet-h49vl" [6a1383f0-1850-4dff-991e-4fa71596bb58] Running
	I1018 09:32:11.045524  295952 system_pods.go:89] "kube-apiserver-addons-006674" [e5eff75b-55ea-4fbb-a83d-cc8550c66472] Running
	I1018 09:32:11.045544  295952 system_pods.go:89] "kube-controller-manager-addons-006674" [6e9cccba-4dab-456b-b750-0ac8893a4371] Running
	I1018 09:32:11.045566  295952 system_pods.go:89] "kube-ingress-dns-minikube" [5d722dca-00c8-447d-a43c-acdb7b5482e3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 09:32:11.045585  295952 system_pods.go:89] "kube-proxy-k5bfv" [ecc37a01-e2c2-4a4c-a272-ac37b4cd96f3] Running
	I1018 09:32:11.045613  295952 system_pods.go:89] "kube-scheduler-addons-006674" [ac4de840-6b5c-491a-9cbc-2a080dbc17bf] Running
	I1018 09:32:11.045632  295952 system_pods.go:89] "metrics-server-85b7d694d7-szvm5" [7f1c3285-5e41-444e-a773-3a86f80ec0c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 09:32:11.045655  295952 system_pods.go:89] "nvidia-device-plugin-daemonset-j658f" [a582f724-b46c-4377-b626-fcf59ae12980] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 09:32:11.045689  295952 system_pods.go:89] "registry-6b586f9694-flkkz" [cd38f302-4660-4066-897a-e2246722c55f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 09:32:11.045714  295952 system_pods.go:89] "registry-creds-764b6fb674-tjsdw" [23cd49e2-ec97-44a9-9bd9-370ba2b403c4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 09:32:11.045735  295952 system_pods.go:89] "registry-proxy-46rp2" [99a95a84-cd1c-42d2-b8a8-a0bb70a90f31] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 09:32:11.045769  295952 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9pgqt" [afec06f4-3213-44ef-a323-a658ff117c82] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 09:32:11.045792  295952 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rfbdb" [05444b46-9758-438d-92c4-5bf39c7165b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 09:32:11.045823  295952 system_pods.go:89] "storage-provisioner" [b7652d26-3d30-439f-a886-8dc4a69c9f1e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:32:11.045865  295952 retry.go:31] will retry after 207.462973ms: missing components: kube-dns
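
system_pods.go lists the kube-system pods and retries until none of the required components are Pending; at this point only kube-dns (the coredns pod) was still missing. A client-go sketch of that check, under the same kubeconfig assumption as above:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                // Pending entries here correspond to the "missing components" retries.
                fmt.Printf("%q is %s\n", p.Name, p.Status.Phase)
            }
        }
    }
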
	I1018 09:32:11.115954  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:11.192279  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:11.258397  295952 system_pods.go:86] 19 kube-system pods found
	I1018 09:32:11.258481  295952 system_pods.go:89] "coredns-66bc5c9577-kj5jb" [97c49f44-8c6f-4c14-a90f-31dfda93a372] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:32:11.258505  295952 system_pods.go:89] "csi-hostpath-attacher-0" [d324748f-0916-4247-b8da-42b067ee5ff2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 09:32:11.258547  295952 system_pods.go:89] "csi-hostpath-resizer-0" [05ce0eee-4f78-44d5-b868-c8c9f13f276b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 09:32:11.258569  295952 system_pods.go:89] "csi-hostpathplugin-rswxb" [feddf8af-e1d0-4f04-a39a-eedf70a898ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 09:32:11.258589  295952 system_pods.go:89] "etcd-addons-006674" [c14c573d-1612-42d5-87f8-4bd642899fae] Running
	I1018 09:32:11.258610  295952 system_pods.go:89] "kindnet-h49vl" [6a1383f0-1850-4dff-991e-4fa71596bb58] Running
	I1018 09:32:11.258643  295952 system_pods.go:89] "kube-apiserver-addons-006674" [e5eff75b-55ea-4fbb-a83d-cc8550c66472] Running
	I1018 09:32:11.258663  295952 system_pods.go:89] "kube-controller-manager-addons-006674" [6e9cccba-4dab-456b-b750-0ac8893a4371] Running
	I1018 09:32:11.258686  295952 system_pods.go:89] "kube-ingress-dns-minikube" [5d722dca-00c8-447d-a43c-acdb7b5482e3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 09:32:11.258715  295952 system_pods.go:89] "kube-proxy-k5bfv" [ecc37a01-e2c2-4a4c-a272-ac37b4cd96f3] Running
	I1018 09:32:11.258732  295952 system_pods.go:89] "kube-scheduler-addons-006674" [ac4de840-6b5c-491a-9cbc-2a080dbc17bf] Running
	I1018 09:32:11.258761  295952 system_pods.go:89] "metrics-server-85b7d694d7-szvm5" [7f1c3285-5e41-444e-a773-3a86f80ec0c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 09:32:11.258791  295952 system_pods.go:89] "nvidia-device-plugin-daemonset-j658f" [a582f724-b46c-4377-b626-fcf59ae12980] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 09:32:11.258812  295952 system_pods.go:89] "registry-6b586f9694-flkkz" [cd38f302-4660-4066-897a-e2246722c55f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 09:32:11.258833  295952 system_pods.go:89] "registry-creds-764b6fb674-tjsdw" [23cd49e2-ec97-44a9-9bd9-370ba2b403c4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 09:32:11.258863  295952 system_pods.go:89] "registry-proxy-46rp2" [99a95a84-cd1c-42d2-b8a8-a0bb70a90f31] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 09:32:11.258891  295952 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9pgqt" [afec06f4-3213-44ef-a323-a658ff117c82] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 09:32:11.258911  295952 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rfbdb" [05444b46-9758-438d-92c4-5bf39c7165b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 09:32:11.258943  295952 system_pods.go:89] "storage-provisioner" [b7652d26-3d30-439f-a886-8dc4a69c9f1e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:32:11.258973  295952 retry.go:31] will retry after 251.1907ms: missing components: kube-dns
	I1018 09:32:11.399846  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:11.405050  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:11.516559  295952 system_pods.go:86] 19 kube-system pods found
	I1018 09:32:11.516641  295952 system_pods.go:89] "coredns-66bc5c9577-kj5jb" [97c49f44-8c6f-4c14-a90f-31dfda93a372] Running
	I1018 09:32:11.516677  295952 system_pods.go:89] "csi-hostpath-attacher-0" [d324748f-0916-4247-b8da-42b067ee5ff2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 09:32:11.516700  295952 system_pods.go:89] "csi-hostpath-resizer-0" [05ce0eee-4f78-44d5-b868-c8c9f13f276b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 09:32:11.516725  295952 system_pods.go:89] "csi-hostpathplugin-rswxb" [feddf8af-e1d0-4f04-a39a-eedf70a898ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 09:32:11.516770  295952 system_pods.go:89] "etcd-addons-006674" [c14c573d-1612-42d5-87f8-4bd642899fae] Running
	I1018 09:32:11.516790  295952 system_pods.go:89] "kindnet-h49vl" [6a1383f0-1850-4dff-991e-4fa71596bb58] Running
	I1018 09:32:11.516811  295952 system_pods.go:89] "kube-apiserver-addons-006674" [e5eff75b-55ea-4fbb-a83d-cc8550c66472] Running
	I1018 09:32:11.516840  295952 system_pods.go:89] "kube-controller-manager-addons-006674" [6e9cccba-4dab-456b-b750-0ac8893a4371] Running
	I1018 09:32:11.516868  295952 system_pods.go:89] "kube-ingress-dns-minikube" [5d722dca-00c8-447d-a43c-acdb7b5482e3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 09:32:11.516889  295952 system_pods.go:89] "kube-proxy-k5bfv" [ecc37a01-e2c2-4a4c-a272-ac37b4cd96f3] Running
	I1018 09:32:11.516919  295952 system_pods.go:89] "kube-scheduler-addons-006674" [ac4de840-6b5c-491a-9cbc-2a080dbc17bf] Running
	I1018 09:32:11.516946  295952 system_pods.go:89] "metrics-server-85b7d694d7-szvm5" [7f1c3285-5e41-444e-a773-3a86f80ec0c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 09:32:11.516970  295952 system_pods.go:89] "nvidia-device-plugin-daemonset-j658f" [a582f724-b46c-4377-b626-fcf59ae12980] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 09:32:11.517002  295952 system_pods.go:89] "registry-6b586f9694-flkkz" [cd38f302-4660-4066-897a-e2246722c55f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 09:32:11.517023  295952 system_pods.go:89] "registry-creds-764b6fb674-tjsdw" [23cd49e2-ec97-44a9-9bd9-370ba2b403c4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 09:32:11.517052  295952 system_pods.go:89] "registry-proxy-46rp2" [99a95a84-cd1c-42d2-b8a8-a0bb70a90f31] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 09:32:11.517081  295952 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9pgqt" [afec06f4-3213-44ef-a323-a658ff117c82] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 09:32:11.517103  295952 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rfbdb" [05444b46-9758-438d-92c4-5bf39c7165b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 09:32:11.517132  295952 system_pods.go:89] "storage-provisioner" [b7652d26-3d30-439f-a886-8dc4a69c9f1e] Running
	I1018 09:32:11.517163  295952 system_pods.go:126] duration metric: took 477.740758ms to wait for k8s-apps to be running ...
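
The system_pods lines above are a per-pod readiness sweep over kube-system: each pod is reported either as Running, or as Pending along with the containers whose Ready condition is still false. A minimal client-go sketch of that kind of check (illustrative only, not minikube's actual system_pods code; the kubeconfig path and namespace are assumptions):

    // List kube-system pods and report phase plus any unready containers,
    // approximating the system_pods.go:89 lines in the log above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            var unready []string
            for _, c := range p.Status.ContainerStatuses {
                if !c.Ready {
                    unready = append(unready, c.Name)
                }
            }
            fmt.Printf("%s %s unready=%v\n", p.Name, p.Status.Phase, unready)
        }
    }
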
	I1018 09:32:11.517206  295952 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:32:11.517298  295952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:32:11.534818  295952 system_svc.go:56] duration metric: took 17.602576ms WaitForService to wait for kubelet
	I1018 09:32:11.534916  295952 kubeadm.go:586] duration metric: took 44.262770068s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:32:11.534950  295952 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:32:11.538438  295952 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:32:11.538518  295952 node_conditions.go:123] node cpu capacity is 2
	I1018 09:32:11.538547  295952 node_conditions.go:105] duration metric: took 3.578196ms to run NodePressure ...
	I1018 09:32:11.538573  295952 start.go:241] waiting for startup goroutines ...
	I1018 09:32:11.614943  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:11.689045  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:11.905081  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:11.905334  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:12.114599  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:12.189271  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:12.406519  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:12.414706  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:12.615914  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:12.688676  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:12.895613  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:12.895784  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:13.114849  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:13.188195  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:13.404052  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:13.404499  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:13.615731  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:13.688637  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:13.897453  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:13.897942  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:14.115135  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:14.188866  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:14.395759  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:14.395902  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:14.616724  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:14.688446  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:14.898034  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:14.898530  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:15.115347  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:15.189631  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:15.396961  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:15.397355  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:15.614847  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:15.689884  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:15.896934  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:15.897355  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:16.114951  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:16.189164  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:16.396434  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:16.396874  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:16.615592  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:16.689712  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:16.896695  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:16.897492  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:17.115036  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:17.190213  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:17.396004  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:17.396636  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:17.616141  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:17.689598  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:17.896899  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:17.897283  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:18.114399  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:18.189761  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:18.396903  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:18.397283  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:18.617139  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:18.717319  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:18.896113  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:18.896248  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:19.114885  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:19.189082  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:19.396717  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:19.397866  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:19.616194  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:19.689674  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:19.896985  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:19.897417  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:20.115230  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:20.190027  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:20.396892  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:20.397240  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:20.614768  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:20.688578  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:20.900667  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:20.901106  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:21.114702  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:21.194592  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:21.396970  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:21.397675  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:21.616221  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:21.689708  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:21.896577  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:21.896915  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:22.114496  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:22.190318  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:22.395917  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:22.396095  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:22.567495  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:32:22.615611  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:22.689373  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:22.896060  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:22.896190  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:23.114459  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:23.189257  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:23.395916  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:23.396457  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:23.615227  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:23.689977  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:23.779619  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.212086726s)
	W1018 09:32:23.779656  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:32:23.779675  295952 retry.go:31] will retry after 22.639093049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
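
The retry.go line above reschedules the failed apply with a growing, jittered delay (22.6s here, then 34.1s after the next failure below) instead of failing the addon outright. A self-contained sketch of that retry-with-backoff pattern follows; the delays, deadline, and direct kubectl invocation are illustrative assumptions, since minikube actually runs the command over SSH under its own backoff policy:

    // Re-run the addon apply with a jittered, roughly doubling backoff until
    // it succeeds or a deadline passes, mirroring the retry.go behavior above.
    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        delay := 10 * time.Second
        for {
            cmd := exec.Command("kubectl", "apply", "--force",
                "-f", "/etc/kubernetes/addons/ig-crd.yaml",
                "-f", "/etc/kubernetes/addons/ig-deployment.yaml")
            out, err := cmd.CombinedOutput()
            if err == nil {
                fmt.Println("apply succeeded")
                return
            }
            if time.Now().After(deadline) {
                panic(fmt.Sprintf("giving up: %v\n%s", err, out))
            }
            // Jitter produces delays like the logged "will retry after 22.639093049s".
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("apply failed, will retry after %s: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
    }
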
	I1018 09:32:23.895392  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:23.896375  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:24.114868  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:24.189669  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:24.394475  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:24.395953  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:24.615520  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:24.689110  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:24.896508  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:24.896601  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:25.114989  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:25.189328  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:25.397122  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:25.397380  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:25.614508  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:25.689310  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:25.894430  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:25.896651  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:26.115096  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:26.188993  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:26.396240  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:26.396373  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:26.614608  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:26.689389  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:26.896718  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:26.896993  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:27.116917  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:27.189679  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:27.396274  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:27.396805  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:27.615862  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:27.690178  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:27.895827  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:27.896351  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:28.115378  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:28.216354  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:28.396740  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:28.396844  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:28.615657  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:28.717666  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:28.899981  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:28.901369  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:29.115057  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:29.189111  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:29.396258  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:29.396432  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:29.614777  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:29.688814  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:29.897556  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:29.897688  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:30.116115  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:30.189518  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:30.395208  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:30.396385  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:30.615567  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:30.689456  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:30.895949  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:30.896093  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:31.114545  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:31.189793  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:31.395794  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:31.395939  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:31.614203  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:31.689214  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:31.896992  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:31.897439  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:32.114784  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:32.188844  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:32.397082  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:32.397512  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:32.615592  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:32.689875  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:32.895751  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:32.895959  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:33.115331  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:33.189363  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:33.395643  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:33.395794  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:33.615517  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:33.689567  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:33.895362  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:33.896577  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:34.115884  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:34.188798  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:34.396135  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:34.396464  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:34.615189  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:34.688947  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:34.900125  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:34.900780  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:35.117435  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:35.190238  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:35.394473  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:35.396297  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:35.615645  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:35.690088  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:35.895618  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:35.895776  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:36.115770  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:36.189027  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:36.396814  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:36.397581  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:36.615041  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:36.689021  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:36.896684  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:36.897009  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:37.114509  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:37.219904  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:37.395673  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:37.396035  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:37.616240  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:37.716178  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:37.896753  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:37.896970  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:38.115693  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:38.189302  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:38.395554  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:38.396641  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:38.615546  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:38.689544  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:38.896304  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:38.896799  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:39.115699  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:39.189766  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:39.396842  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:39.397444  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:39.615572  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:39.689167  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:39.896051  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:39.896440  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:40.117883  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:40.218829  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:40.396039  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:40.396566  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:40.616054  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:40.688976  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:40.896260  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:40.896411  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:41.117149  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:41.191684  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:41.396282  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:41.396949  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:41.614800  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:41.689504  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:41.897406  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:41.897736  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:42.120142  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:42.189903  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:42.395895  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:42.396076  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:42.614948  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:42.688905  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:42.897276  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:42.897446  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:43.115485  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:43.189305  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:43.408451  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:43.416016  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:43.615422  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:43.689731  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:43.896842  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:43.897390  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:44.115417  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:44.189992  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:44.396143  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:44.396181  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:44.615046  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:44.688992  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:44.895944  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:44.896152  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:45.138659  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:45.190768  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:45.395990  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:45.396135  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:45.614968  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:45.689479  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:45.896959  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:45.897436  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:46.115635  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:46.216198  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:46.396266  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:46.396427  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:46.419713  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:32:46.616569  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:46.691071  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:46.930899  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:46.931594  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:47.116393  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:47.194940  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:47.396002  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:47.396202  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:47.622265  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:47.689595  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:47.903927  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:47.904421  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:48.114906  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:48.189862  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:48.342754  295952 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.922999268s)
	W1018 09:32:48.342797  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:32:48.342818  295952 retry.go:31] will retry after 34.0679614s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
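
Both apply attempts fail for the same root cause shown in stderr: one document in ig-crd.yaml is missing its apiVersion and kind, so client-side validation rejects the file even though every other manifest in the pair applies cleanly. A sketch that surfaces exactly this defect before applying, by decoding each YAML document's TypeMeta; the file path comes from the log, while the naive "---" splitting is an assumption that is good enough for illustration:

    // Decode each YAML document's TypeMeta and flag the exact defect kubectl
    // reports above: "apiVersion not set, kind not set".
    package main

    import (
        "fmt"
        "os"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
        if err != nil {
            panic(err)
        }
        for i, doc := range strings.Split(string(data), "\n---\n") {
            if strings.TrimSpace(doc) == "" {
                continue
            }
            var tm metav1.TypeMeta
            if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
                panic(err)
            }
            if tm.APIVersion == "" || tm.Kind == "" {
                fmt.Printf("document %d: apiVersion=%q kind=%q (would fail validation)\n",
                    i, tm.APIVersion, tm.Kind)
            }
        }
    }

The escape hatch suggested in the stderr, rerunning with --validate=false, would only mask the problem rather than fix the manifest.
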
	I1018 09:32:48.395908  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:48.396052  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:48.614748  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:48.692227  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:48.896593  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:48.897554  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:49.115416  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:49.190201  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:49.396181  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:49.396587  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:49.615211  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:49.689646  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:49.899408  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:49.899553  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:50.115730  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:50.188851  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:50.395546  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:50.396673  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:50.615252  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:50.692352  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:50.897245  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:50.897845  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:51.115576  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:51.190069  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:51.395817  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:51.396303  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:51.614675  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:51.690139  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:51.897607  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:51.897702  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:52.114963  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:52.189924  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:52.396959  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:52.397310  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:52.615170  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:52.689423  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:52.897217  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:52.898059  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:53.117102  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:53.198343  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:53.398619  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:53.399991  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:53.618020  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:53.710913  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:53.897843  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:53.898310  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:54.121088  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:54.189060  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:54.396084  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:54.396190  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:54.614945  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:54.690338  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:54.899503  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:54.899668  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:55.115342  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:55.190723  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:55.395884  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:55.396051  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:55.614758  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:55.695372  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:55.895083  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:55.895670  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:56.122586  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:56.219702  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:56.394758  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:56.396199  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:56.614602  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:56.689083  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:56.896670  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:56.896875  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:57.115792  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:57.192174  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:57.396968  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:57.397684  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:57.617013  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:57.689215  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:57.896931  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:57.897302  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:58.115642  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:58.189861  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:58.396883  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:32:58.396956  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:58.616374  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:58.690333  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:58.895250  295952 kapi.go:107] duration metric: took 1m25.003865289s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 09:32:58.895925  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:59.115379  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:59.189444  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:59.396019  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:32:59.622143  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:32:59.688772  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:32:59.897306  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:00.115925  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:00.190463  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:00.397368  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:00.615054  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:00.689421  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:00.896116  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:01.116138  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:01.190003  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:01.395963  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:01.615658  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:01.689429  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:01.896495  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:02.115197  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:02.190062  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:02.396147  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:02.616684  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:02.692056  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:02.895922  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:03.122247  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:03.190797  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:03.395930  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:03.614191  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:03.690680  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:03.911673  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:04.119811  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:04.189408  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:04.396682  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:04.616088  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:04.690198  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:04.896303  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:05.117125  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:05.216474  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:05.396080  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:05.614712  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:05.691392  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:05.895705  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:06.115527  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:06.190008  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:06.396336  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:06.614466  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:06.689211  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:06.895614  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:07.115582  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:07.189976  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:07.396506  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:07.614950  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:07.690057  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:07.896148  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:08.118500  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:08.189089  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:08.395393  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:08.614914  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:08.689085  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:08.895940  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:09.115564  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:09.190113  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:09.410455  295952 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:33:09.620063  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:09.689073  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:09.897018  295952 kapi.go:107] duration metric: took 1m36.004771364s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 09:33:10.116229  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:10.189357  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:10.656775  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:10.693435  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:11.115348  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:11.189764  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:11.614763  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:11.715044  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:12.114722  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:12.192780  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:12.619571  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:12.689917  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:13.116367  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:13.189211  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:13.615659  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:13.690048  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:14.115479  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:14.189516  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:14.614872  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:14.693265  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:15.114701  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:15.189118  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:15.615131  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:15.688629  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:16.115333  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:16.190189  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:16.614301  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:16.689113  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:17.114567  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:17.189970  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:17.615202  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:17.689255  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:33:18.117584  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:18.189365  295952 kapi.go:107] duration metric: took 1m40.503599602s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 09:33:18.192321  295952 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-006674 cluster.
	I1018 09:33:18.195223  295952 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 09:33:18.198012  295952 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
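
As a minimal sketch of the opt-out mentioned in the message above (pod name and image are illustrative, not from this run; the gcp-auth webhook message only names the `gcp-auth-skip-secret` label key, and "true" is the conventional value):

    # Hypothetical pod that opts out of GCP credential injection:
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds               # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"   # label key named in the gcp-auth message above
    spec:
      containers:
      - name: app
        image: busybox:stable          # any image; busybox appears elsewhere in this run
        command: ["sleep", "3600"]
    EOF
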
	I1018 09:33:18.615155  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:19.115073  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:19.614870  295952 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:33:20.114898  295952 kapi.go:107] duration metric: took 1m45.503768448s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 09:33:22.411775  295952 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 09:33:23.306200  295952 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 09:33:23.306297  295952 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
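
The validation failure above means a document in ig-crd.yaml reached kubectl without the top-level `apiVersion` and `kind` fields that every manifest must carry. As a sketch only (group and names are illustrative, not the actual inspektor-gadget CRD contents), a minimal CustomResourceDefinition that passes client-side validation looks like:

    cat <<'EOF' | kubectl apply --dry-run=client -f -
    apiVersion: apiextensions.k8s.io/v1      # the two fields the error reports missing
    kind: CustomResourceDefinition
    metadata:
      name: traces.gadget.example.io         # hypothetical; must be <plural>.<group>
    spec:
      group: gadget.example.io
      scope: Namespaced
      names:
        plural: traces
        singular: trace
        kind: Trace
      versions:
      - name: v1alpha1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
    EOF

`--validate=false`, as the error message suggests, would only suppress the check; the manifest itself still needs the missing fields to define a usable object.
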
	I1018 09:33:23.311109  295952 out.go:179] * Enabled addons: registry-creds, ingress-dns, amd-gpu-device-plugin, storage-provisioner-rancher, cloud-spanner, storage-provisioner, nvidia-device-plugin, metrics-server, default-storageclass, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1018 09:33:23.314110  295952 addons.go:514] duration metric: took 1m56.041488915s for enable addons: enabled=[registry-creds ingress-dns amd-gpu-device-plugin storage-provisioner-rancher cloud-spanner storage-provisioner nvidia-device-plugin metrics-server default-storageclass yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1018 09:33:23.314186  295952 start.go:246] waiting for cluster config update ...
	I1018 09:33:23.314209  295952 start.go:255] writing updated cluster config ...
	I1018 09:33:23.314547  295952 ssh_runner.go:195] Run: rm -f paused
	I1018 09:33:23.318795  295952 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:33:23.322965  295952 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kj5jb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:23.328121  295952 pod_ready.go:94] pod "coredns-66bc5c9577-kj5jb" is "Ready"
	I1018 09:33:23.328151  295952 pod_ready.go:86] duration metric: took 5.153969ms for pod "coredns-66bc5c9577-kj5jb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:23.330698  295952 pod_ready.go:83] waiting for pod "etcd-addons-006674" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:23.336124  295952 pod_ready.go:94] pod "etcd-addons-006674" is "Ready"
	I1018 09:33:23.336152  295952 pod_ready.go:86] duration metric: took 5.425994ms for pod "etcd-addons-006674" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:23.338468  295952 pod_ready.go:83] waiting for pod "kube-apiserver-addons-006674" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:23.343397  295952 pod_ready.go:94] pod "kube-apiserver-addons-006674" is "Ready"
	I1018 09:33:23.343431  295952 pod_ready.go:86] duration metric: took 4.937166ms for pod "kube-apiserver-addons-006674" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:23.346323  295952 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-006674" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:23.722541  295952 pod_ready.go:94] pod "kube-controller-manager-addons-006674" is "Ready"
	I1018 09:33:23.722574  295952 pod_ready.go:86] duration metric: took 376.224452ms for pod "kube-controller-manager-addons-006674" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:23.923288  295952 pod_ready.go:83] waiting for pod "kube-proxy-k5bfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:24.322983  295952 pod_ready.go:94] pod "kube-proxy-k5bfv" is "Ready"
	I1018 09:33:24.323020  295952 pod_ready.go:86] duration metric: took 399.703074ms for pod "kube-proxy-k5bfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:24.522946  295952 pod_ready.go:83] waiting for pod "kube-scheduler-addons-006674" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:24.923376  295952 pod_ready.go:94] pod "kube-scheduler-addons-006674" is "Ready"
	I1018 09:33:24.923404  295952 pod_ready.go:86] duration metric: took 400.428455ms for pod "kube-scheduler-addons-006674" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:24.923416  295952 pod_ready.go:40] duration metric: took 1.604591127s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
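
A rough manual equivalent of the readiness loop above, reusing the same label selectors and the 4m budget (an approximation with kubectl, not what minikube itself executes):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=4m
    done

Unlike minikube's check, `kubectl wait` fails when a selector matches no pods, so it does not cover the "or be gone" branch of the loop above.
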
	I1018 09:33:24.994791  295952 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:33:24.998663  295952 out.go:179] * Done! kubectl is now configured to use "addons-006674" cluster and "default" namespace by default
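
The "minor skew: 1" reported above (client 1.33.2 against server 1.34.1) is within kubectl's supported range; upstream skew policy supports kubectl against API servers within one minor version in either direction. To check the skew on a cluster:

    kubectl version --output=json   # compare .clientVersion with .serverVersion
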
	
	
	==> CRI-O <==
	Oct 18 09:34:04 addons-006674 crio[829]: time="2025-10-18T09:34:04.662638419Z" level=info msg="Started container" PID=5630 containerID=e66ddcd56a05f7f7158fc29d549958d5acd3eb638ed8d433caacf7e5b590e2bf description=default/test-local-path/busybox id=9b751c1e-0de9-49e1-ab33-54aa11a8b16e name=/runtime.v1.RuntimeService/StartContainer sandboxID=655718c5b22e7f16f52d73c4b5ba49de7338a07bb52d34616cb0e271f2d091ab
	Oct 18 09:34:06 addons-006674 crio[829]: time="2025-10-18T09:34:06.06389124Z" level=info msg="Stopping pod sandbox: 655718c5b22e7f16f52d73c4b5ba49de7338a07bb52d34616cb0e271f2d091ab" id=50ab9bc5-2fb8-4bcf-a0f4-d2eb1e190c41 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 09:34:06 addons-006674 crio[829]: time="2025-10-18T09:34:06.064199722Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:655718c5b22e7f16f52d73c4b5ba49de7338a07bb52d34616cb0e271f2d091ab UID:227845a1-4c60-4c2e-96dc-bc4f74d57561 NetNS:/var/run/netns/a2b96cd9-beb9-4edb-a587-566854d21189 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40001309e8}] Aliases:map[]}"
	Oct 18 09:34:06 addons-006674 crio[829]: time="2025-10-18T09:34:06.06434426Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:34:06 addons-006674 crio[829]: time="2025-10-18T09:34:06.087492758Z" level=info msg="Stopped pod sandbox: 655718c5b22e7f16f52d73c4b5ba49de7338a07bb52d34616cb0e271f2d091ab" id=50ab9bc5-2fb8-4bcf-a0f4-d2eb1e190c41 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 09:34:07 addons-006674 crio[829]: time="2025-10-18T09:34:07.719676169Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e/POD" id=cc421f99-5c0e-4a86-b468-8b164d439ea7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:34:07 addons-006674 crio[829]: time="2025-10-18T09:34:07.719740352Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:34:07 addons-006674 crio[829]: time="2025-10-18T09:34:07.728195006Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e Namespace:local-path-storage ID:9e45911b8bb5799d4d5488d025fb93cd06510d124a5d51e44eff3a25a8263c78 UID:f10908d4-0183-470e-8368-efa28b023a6c NetNS:/var/run/netns/0228aa64-14a6-46af-a486-24da65646028 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001d32898}] Aliases:map[]}"
	Oct 18 09:34:07 addons-006674 crio[829]: time="2025-10-18T09:34:07.728397326Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e to CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:34:07 addons-006674 crio[829]: time="2025-10-18T09:34:07.739219227Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e Namespace:local-path-storage ID:9e45911b8bb5799d4d5488d025fb93cd06510d124a5d51e44eff3a25a8263c78 UID:f10908d4-0183-470e-8368-efa28b023a6c NetNS:/var/run/netns/0228aa64-14a6-46af-a486-24da65646028 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001d32898}] Aliases:map[]}"
	Oct 18 09:34:07 addons-006674 crio[829]: time="2025-10-18T09:34:07.739580125Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e for CNI network kindnet (type=ptp)"
	Oct 18 09:34:07 addons-006674 crio[829]: time="2025-10-18T09:34:07.744503654Z" level=info msg="Ran pod sandbox 9e45911b8bb5799d4d5488d025fb93cd06510d124a5d51e44eff3a25a8263c78 with infra container: local-path-storage/helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e/POD" id=cc421f99-5c0e-4a86-b468-8b164d439ea7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:34:07 addons-006674 crio[829]: time="2025-10-18T09:34:07.745851234Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=78737996-cfeb-4352-9a4a-5ea6186aa98e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:34:07 addons-006674 crio[829]: time="2025-10-18T09:34:07.753389625Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=cae9573c-30fa-4516-8bd4-4fa5758f5ef2 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:34:07 addons-006674 crio[829]: time="2025-10-18T09:34:07.763641177Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e/helper-pod" id=c60c841f-a57b-4f8e-9a1c-90749f900881 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:34:07 addons-006674 crio[829]: time="2025-10-18T09:34:07.765151404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:34:07 addons-006674 crio[829]: time="2025-10-18T09:34:07.780443791Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:34:07 addons-006674 crio[829]: time="2025-10-18T09:34:07.781335848Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:34:07 addons-006674 crio[829]: time="2025-10-18T09:34:07.820215672Z" level=info msg="Created container 72832a1b553f8129ddd73512dea027a7a66e979b00aead7af590292a0a8a8c9a: local-path-storage/helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e/helper-pod" id=c60c841f-a57b-4f8e-9a1c-90749f900881 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:34:07 addons-006674 crio[829]: time="2025-10-18T09:34:07.821606748Z" level=info msg="Starting container: 72832a1b553f8129ddd73512dea027a7a66e979b00aead7af590292a0a8a8c9a" id=9a602358-2d2d-466d-8679-7c233bc8e506 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:34:07 addons-006674 crio[829]: time="2025-10-18T09:34:07.823488105Z" level=info msg="Started container" PID=5711 containerID=72832a1b553f8129ddd73512dea027a7a66e979b00aead7af590292a0a8a8c9a description=local-path-storage/helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e/helper-pod id=9a602358-2d2d-466d-8679-7c233bc8e506 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9e45911b8bb5799d4d5488d025fb93cd06510d124a5d51e44eff3a25a8263c78
	Oct 18 09:34:09 addons-006674 crio[829]: time="2025-10-18T09:34:09.07836534Z" level=info msg="Stopping pod sandbox: 9e45911b8bb5799d4d5488d025fb93cd06510d124a5d51e44eff3a25a8263c78" id=04a721d1-71e3-4942-9083-5add6274f4fe name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 09:34:09 addons-006674 crio[829]: time="2025-10-18T09:34:09.078666823Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e Namespace:local-path-storage ID:9e45911b8bb5799d4d5488d025fb93cd06510d124a5d51e44eff3a25a8263c78 UID:f10908d4-0183-470e-8368-efa28b023a6c NetNS:/var/run/netns/0228aa64-14a6-46af-a486-24da65646028 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001d33078}] Aliases:map[]}"
	Oct 18 09:34:09 addons-006674 crio[829]: time="2025-10-18T09:34:09.078800767Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e from CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:34:09 addons-006674 crio[829]: time="2025-10-18T09:34:09.109635239Z" level=info msg="Stopped pod sandbox: 9e45911b8bb5799d4d5488d025fb93cd06510d124a5d51e44eff3a25a8263c78" id=04a721d1-71e3-4942-9083-5add6274f4fe name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	72832a1b553f8       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             6 seconds ago        Exited              helper-pod                               0                   9e45911b8bb57       helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e   local-path-storage
	e66ddcd56a05f       docker.io/library/busybox@sha256:aefc3a378c4cf11a6d85071438d3bf7634633a34c6a68d4c5f928516d556c366                                            9 seconds ago        Exited              busybox                                  0                   655718c5b22e7       test-local-path                                              default
	86b67cfdb9b4a       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          26 seconds ago       Exited              registry-test                            0                   986d63e059232       registry-test                                                default
	f68068950f0ae       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          46 seconds ago       Running             busybox                                  0                   1ab132f027cab       busybox                                                      default
	92482b56ebf75       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          55 seconds ago       Running             csi-snapshotter                          0                   057bb08f47fad       csi-hostpathplugin-rswxb                                     kube-system
	bb97628e9a17a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 57 seconds ago       Running             gcp-auth                                 0                   d93a4f2b7e8f1       gcp-auth-78565c9fb4-m69wg                                    gcp-auth
	2668cbad9c190       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          About a minute ago   Running             csi-provisioner                          0                   057bb08f47fad       csi-hostpathplugin-rswxb                                     kube-system
	3868b4ac74b7b       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            About a minute ago   Running             liveness-probe                           0                   057bb08f47fad       csi-hostpathplugin-rswxb                                     kube-system
	03c9c979a54ef       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           About a minute ago   Running             hostpath                                 0                   057bb08f47fad       csi-hostpathplugin-rswxb                                     kube-system
	9d6a5e7844b19       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                About a minute ago   Running             node-driver-registrar                    0                   057bb08f47fad       csi-hostpathplugin-rswxb                                     kube-system
	a1e4b9d5843fa       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             About a minute ago   Running             controller                               0                   43228e15490f7       ingress-nginx-controller-675c5ddd98-fjw9h                    ingress-nginx
	c33cff2bf27b1       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            About a minute ago   Running             gadget                                   0                   6974073f293e4       gadget-77zfw                                                 gadget
	025d3e64c63bd       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   1b7c0aeaf614c       registry-proxy-46rp2                                         kube-system
	6dc8838bc5eae       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             About a minute ago   Exited              patch                                    3                   9fbcf46338ec4       gcp-auth-certs-patch-9nrz4                                   gcp-auth
	e66aaf86ae284       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   ea082e8f2f885       csi-hostpath-resizer-0                                       kube-system
	fc5f92cc54e39       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   6f5c11abd95bb       snapshot-controller-7d9fbc56b8-rfbdb                         kube-system
	7847608ada903       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   0c5c75733b5bb       gcp-auth-certs-create-gtjdn                                  gcp-auth
	4ed69c6d109cc       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   9d84459635af9       metrics-server-85b7d694d7-szvm5                              kube-system
	442597e183407       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   057bb08f47fad       csi-hostpathplugin-rswxb                                     kube-system
	54b6974a01255       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   8c524d7e32519       registry-6b586f9694-flkkz                                    kube-system
	0a966a2cc2562       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   5e1a17f615416       local-path-provisioner-648f6765c9-848bh                      local-path-storage
	d7a1cd7ba1844       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   49db078e1621a       kube-ingress-dns-minikube                                    kube-system
	59dad7f7df257       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             About a minute ago   Exited              patch                                    2                   e44195682fefd       ingress-nginx-admission-patch-rlk7w                          ingress-nginx
	7c2c142c2c3e7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   0b5298e8c04ff       ingress-nginx-admission-create-zp84p                         ingress-nginx
	1aec9843e6b35       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   3636cd0863c74       nvidia-device-plugin-daemonset-j658f                         kube-system
	fdaf99bae646f       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   6720a28ae23c3       csi-hostpath-attacher-0                                      kube-system
	26c85f4442390       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   50196ba9e7bb6       yakd-dashboard-5ff678cb9-zncv4                               yakd-dashboard
	faa7882723437       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   54e1ed93b01e7       snapshot-controller-7d9fbc56b8-9pgqt                         kube-system
	94c0691b24391       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   3ecc2127e7c4e       cloud-spanner-emulator-86bd5cbb97-ld2w5                      default
	8ba1ab4998b33       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             2 minutes ago        Running             storage-provisioner                      0                   0d8f1f49f68b9       storage-provisioner                                          kube-system
	7a4cd51451e05       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             2 minutes ago        Running             coredns                                  0                   bf1e3789f3182       coredns-66bc5c9577-kj5jb                                     kube-system
	ee39b4a9868c7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   d08071cc6edec       kube-proxy-k5bfv                                             kube-system
	6864cc8c9035c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   3c55ceb1a3be3       kindnet-h49vl                                                kube-system
	7c7055bef3a7a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   4b50b5a00e3f5       kube-apiserver-addons-006674                                 kube-system
	265553ed8d31e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   6cbf64983ec75       kube-scheduler-addons-006674                                 kube-system
	218e3162f40e7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   a7b0f6f557485       etcd-addons-006674                                           kube-system
	ca64f5775c712       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   3c2c7b7a2927b       kube-controller-manager-addons-006674                        kube-system
	
	
	==> coredns [7a4cd51451e0593916b537cc8613320fe84f5ad1b48e9c20ea79b02ebff89f08] <==
	[INFO] 10.244.0.9:48205 - 59177 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003122478s
	[INFO] 10.244.0.9:48205 - 41901 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000097349s
	[INFO] 10.244.0.9:48205 - 31786 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000092622s
	[INFO] 10.244.0.9:39720 - 56159 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131451s
	[INFO] 10.244.0.9:39720 - 55913 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00015838s
	[INFO] 10.244.0.9:54791 - 31067 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081365s
	[INFO] 10.244.0.9:54791 - 31264 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000127734s
	[INFO] 10.244.0.9:53474 - 1053 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000123467s
	[INFO] 10.244.0.9:53474 - 872 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000163337s
	[INFO] 10.244.0.9:57779 - 37705 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001362865s
	[INFO] 10.244.0.9:57779 - 37942 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001395621s
	[INFO] 10.244.0.9:51830 - 52679 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121481s
	[INFO] 10.244.0.9:51830 - 52852 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000614928s
	[INFO] 10.244.0.21:48710 - 33868 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000161638s
	[INFO] 10.244.0.21:36211 - 54023 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000157921s
	[INFO] 10.244.0.21:34339 - 25166 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117788s
	[INFO] 10.244.0.21:51831 - 11352 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127922s
	[INFO] 10.244.0.21:34666 - 47358 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000092311s
	[INFO] 10.244.0.21:40122 - 10564 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082054s
	[INFO] 10.244.0.21:40639 - 26495 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002174058s
	[INFO] 10.244.0.21:41310 - 11597 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002413386s
	[INFO] 10.244.0.21:44454 - 13966 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00243069s
	[INFO] 10.244.0.21:39046 - 33723 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002488235s
	[INFO] 10.244.0.23:40860 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000260726s
	[INFO] 10.244.0.23:47996 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145326s
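
The NXDOMAIN runs above are the pod resolver walking its DNS search path: the kubelet-generated /etc/resolv.conf appends each search suffix before trying the name verbatim, because the default `ndots:5` treats names with fewer than five dots as relative. To inspect the config that drives this, using the busybox pod from this run (suffixes and nameserver below are illustrative and vary by namespace, cluster, and host):

    kubectl exec busybox -- cat /etc/resolv.conf
    # Typical kubelet-generated contents (illustrative):
    #   search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    #   nameserver 10.96.0.10
    #   options ndots:5
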
	
	
	==> describe nodes <==
	Name:               addons-006674
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-006674
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=addons-006674
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_31_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-006674
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-006674"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:31:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-006674
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:34:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:33:56 +0000   Sat, 18 Oct 2025 09:31:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:33:56 +0000   Sat, 18 Oct 2025 09:31:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:33:56 +0000   Sat, 18 Oct 2025 09:31:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:33:56 +0000   Sat, 18 Oct 2025 09:32:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-006674
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                5f95656e-8cd5-4065-8611-2240f79f89f6
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  default                     cloud-spanner-emulator-86bd5cbb97-ld2w5      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  gadget                      gadget-77zfw                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  gcp-auth                    gcp-auth-78565c9fb4-m69wg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-fjw9h    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m41s
	  kube-system                 coredns-66bc5c9577-kj5jb                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m46s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 csi-hostpathplugin-rswxb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 etcd-addons-006674                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m51s
	  kube-system                 kindnet-h49vl                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m46s
	  kube-system                 kube-apiserver-addons-006674                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-controller-manager-addons-006674        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-proxy-k5bfv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 kube-scheduler-addons-006674                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 metrics-server-85b7d694d7-szvm5              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m42s
	  kube-system                 nvidia-device-plugin-daemonset-j658f         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 registry-6b586f9694-flkkz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 registry-creds-764b6fb674-tjsdw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 registry-proxy-46rp2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 snapshot-controller-7d9fbc56b8-9pgqt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 snapshot-controller-7d9fbc56b8-rfbdb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  local-path-storage          local-path-provisioner-648f6765c9-848bh      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-zncv4               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
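  (Cross-check on the CPU figure: allocatable CPU is 2 cores = 2000m, so 1050m in requests is 1050/2000 = 52.5%, shown as 52% above; the 100m limit is likewise 5%.)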
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m44s  kube-proxy       
	  Normal   Starting                 2m52s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m52s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m51s  kubelet          Node addons-006674 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m51s  kubelet          Node addons-006674 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m51s  kubelet          Node addons-006674 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m47s  node-controller  Node addons-006674 event: Registered Node addons-006674 in Controller
	  Normal   NodeReady                2m5s   kubelet          Node addons-006674 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015604] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504512] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034321] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.754127] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.006986] kauditd_printk_skb: 36 callbacks suppressed
	[Oct18 08:37] hrtimer: interrupt took 52245394 ns
	[Oct18 08:40] FS-Cache: Duplicate cookie detected
	[  +0.000820] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001041] FS-Cache: O-cookie d=0000000012c02099{9P.session} n=0000000039d56c98
	[  +0.001191] FS-Cache: O-key=[10] '34323935323339393835'
	[  +0.000847] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001040] FS-Cache: N-cookie d=0000000012c02099{9P.session} n=00000000aa671ad4
	[  +0.001145] FS-Cache: N-key=[10] '34323935323339393835'
	[Oct18 09:29] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 09:31] overlayfs: idmapped layers are currently not supported
	[  +0.081210] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [218e3162f40e71fa576a92a613a2a422c61a439446739273ed3ec3b5b069db24] <==
	{"level":"warn","ts":"2025-10-18T09:31:18.819004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.835226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.887677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.895827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.904764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.921368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.942678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.955514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.977084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:18.997049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.008106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.029415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.042537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.066881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.083887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.113876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.141356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.150430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:19.254324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:34.753297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:34.771974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:57.089047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:57.103080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:57.151868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:57.168947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38122","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [bb97628e9a17acaa4875c83f17618a29d2209fb42d631e76e614a0da27d47629] <==
	2025/10/18 09:33:17 GCP Auth Webhook started!
	2025/10/18 09:33:25 Ready to marshal response ...
	2025/10/18 09:33:25 Ready to write response ...
	2025/10/18 09:33:26 Ready to marshal response ...
	2025/10/18 09:33:26 Ready to write response ...
	2025/10/18 09:33:26 Ready to marshal response ...
	2025/10/18 09:33:26 Ready to write response ...
	2025/10/18 09:33:46 Ready to marshal response ...
	2025/10/18 09:33:46 Ready to write response ...
	2025/10/18 09:33:46 Ready to marshal response ...
	2025/10/18 09:33:46 Ready to write response ...
	2025/10/18 09:33:59 Ready to marshal response ...
	2025/10/18 09:33:59 Ready to write response ...
	2025/10/18 09:33:59 Ready to marshal response ...
	2025/10/18 09:33:59 Ready to write response ...
	2025/10/18 09:34:07 Ready to marshal response ...
	2025/10/18 09:34:07 Ready to write response ...
	
	
	==> kernel <==
	 09:34:15 up  1:16,  0 user,  load average: 1.92, 2.50, 2.82
	Linux addons-006674 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6864cc8c9035cc4900e88044a87d6126b379de12ae10cf15ebcbac3d449777c6] <==
	I1018 09:32:09.021391       1 main.go:301] handling current node
	I1018 09:32:19.019477       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:32:19.019545       1 main.go:301] handling current node
	I1018 09:32:29.018629       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:32:29.018656       1 main.go:301] handling current node
	I1018 09:32:39.018912       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:32:39.018984       1 main.go:301] handling current node
	I1018 09:32:49.018871       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:32:49.018933       1 main.go:301] handling current node
	I1018 09:32:59.019133       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:32:59.019190       1 main.go:301] handling current node
	I1018 09:33:09.018706       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:33:09.018737       1 main.go:301] handling current node
	I1018 09:33:19.019545       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:33:19.019655       1 main.go:301] handling current node
	I1018 09:33:29.018783       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:33:29.018819       1 main.go:301] handling current node
	I1018 09:33:39.018469       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:33:39.018502       1 main.go:301] handling current node
	I1018 09:33:49.018771       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:33:49.018807       1 main.go:301] handling current node
	I1018 09:33:59.018656       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:33:59.018768       1 main.go:301] handling current node
	I1018 09:34:09.019534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:34:09.019564       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7c7055bef3a7ada650e4d5f05a879413867ddb0163357c223b1f47a1b921b99f] <==
	W1018 09:32:09.435502       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.235.107:443: connect: connection refused
	E1018 09:32:09.435549       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.235.107:443: connect: connection refused" logger="UnhandledError"
	W1018 09:32:33.188792       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 09:32:33.188873       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 09:32:33.188884       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1018 09:32:33.190588       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 09:32:33.190621       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1018 09:32:33.190636       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1018 09:32:54.741127       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.130.149:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.130.149:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.130.149:443: connect: connection refused" logger="UnhandledError"
	W1018 09:32:54.741262       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 09:32:54.741325       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1018 09:32:54.742064       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.130.149:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.130.149:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.130.149:443: connect: connection refused" logger="UnhandledError"
	E1018 09:32:54.747517       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.130.149:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.130.149:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.130.149:443: connect: connection refused" logger="UnhandledError"
	E1018 09:32:54.769311       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.130.149:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.130.149:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.130.149:443: connect: connection refused" logger="UnhandledError"
	I1018 09:32:54.921213       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 09:33:35.350876       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37626: use of closed network connection
	E1018 09:33:35.576413       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37656: use of closed network connection
	E1018 09:33:35.716989       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37684: use of closed network connection
	I1018 09:33:59.826286       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [ca64f5775c712d47b50002e93a4481eb4abcb5b068389fb2bfc06c1f7f58345c] <==
	I1018 09:31:27.068099       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:31:27.076250       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:31:27.076524       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:31:27.079802       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-006674" podCIDRs=["10.244.0.0/24"]
	I1018 09:31:27.080271       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:31:27.088781       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:31:27.090092       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:31:27.096379       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:31:27.097413       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:31:27.108319       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:31:27.110558       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:31:27.110573       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:31:27.110588       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:31:27.111691       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	E1018 09:31:32.312820       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 09:31:57.081578       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 09:31:57.081828       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 09:31:57.081888       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 09:31:57.114886       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 09:31:57.143404       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 09:31:57.183654       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:31:57.244446       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:32:12.032720       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1018 09:32:27.191364       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 09:32:27.252498       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [ee39b4a9868c7aec2142eb39fa00467bfd823efe9960710ad5f7a6d956fff7cc] <==
	I1018 09:31:30.374194       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:31:30.495609       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:31:30.596630       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:31:30.596670       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 09:31:30.596734       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:31:30.667799       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:31:30.667848       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:31:30.680465       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:31:30.680745       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:31:30.680765       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:31:30.684052       1 config.go:200] "Starting service config controller"
	I1018 09:31:30.684064       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:31:30.684080       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:31:30.684085       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:31:30.684096       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:31:30.684099       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:31:30.684749       1 config.go:309] "Starting node config controller"
	I1018 09:31:30.684756       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:31:30.684761       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:31:30.787708       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:31:30.787747       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:31:30.787788       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [265553ed8d31e015701ccfb66997006c5a0cb46907fc11e25d67d2b5235e54e6] <==
	E1018 09:31:20.119759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:31:20.120049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:31:20.120216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:31:20.120277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:31:20.121794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:31:20.122236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:31:20.122433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:31:20.122538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:31:20.122703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:31:20.122826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:31:20.123113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:31:20.123300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:31:21.015339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:31:21.021902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:31:21.060217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:31:21.087464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:31:21.202723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:31:21.236926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:31:21.279066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:31:21.306705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:31:21.319493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:31:21.342332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 09:31:21.346901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:31:21.387833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1018 09:31:23.905086       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:34:06 addons-006674 kubelet[1287]: I1018 09:34:06.317387    1287 reconciler_common.go:299] "Volume detached for volume \"pvc-a1742402-0986-435b-8326-e21304879a9e\" (UniqueName: \"kubernetes.io/host-path/227845a1-4c60-4c2e-96dc-bc4f74d57561-pvc-a1742402-0986-435b-8326-e21304879a9e\") on node \"addons-006674\" DevicePath \"\""
	Oct 18 09:34:07 addons-006674 kubelet[1287]: I1018 09:34:07.068401    1287 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="655718c5b22e7f16f52d73c4b5ba49de7338a07bb52d34616cb0e271f2d091ab"
	Oct 18 09:34:07 addons-006674 kubelet[1287]: I1018 09:34:07.527135    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f10908d4-0183-470e-8368-efa28b023a6c-data\") pod \"helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e\" (UID: \"f10908d4-0183-470e-8368-efa28b023a6c\") " pod="local-path-storage/helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e"
	Oct 18 09:34:07 addons-006674 kubelet[1287]: I1018 09:34:07.527688    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/f10908d4-0183-470e-8368-efa28b023a6c-script\") pod \"helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e\" (UID: \"f10908d4-0183-470e-8368-efa28b023a6c\") " pod="local-path-storage/helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e"
	Oct 18 09:34:07 addons-006674 kubelet[1287]: I1018 09:34:07.527800    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wdg6\" (UniqueName: \"kubernetes.io/projected/f10908d4-0183-470e-8368-efa28b023a6c-kube-api-access-8wdg6\") pod \"helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e\" (UID: \"f10908d4-0183-470e-8368-efa28b023a6c\") " pod="local-path-storage/helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e"
	Oct 18 09:34:07 addons-006674 kubelet[1287]: I1018 09:34:07.527914    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f10908d4-0183-470e-8368-efa28b023a6c-gcp-creds\") pod \"helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e\" (UID: \"f10908d4-0183-470e-8368-efa28b023a6c\") " pod="local-path-storage/helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e"
	Oct 18 09:34:07 addons-006674 kubelet[1287]: W1018 09:34:07.742187    1287 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2a58daa84df606f1a3eacd3ed59a710b3ede45b497c9ff78e57c7c851671ea0c/crio-9e45911b8bb5799d4d5488d025fb93cd06510d124a5d51e44eff3a25a8263c78 WatchSource:0}: Error finding container 9e45911b8bb5799d4d5488d025fb93cd06510d124a5d51e44eff3a25a8263c78: Status 404 returned error can't find the container with id 9e45911b8bb5799d4d5488d025fb93cd06510d124a5d51e44eff3a25a8263c78
	Oct 18 09:34:09 addons-006674 kubelet[1287]: I1018 09:34:09.028133    1287 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="227845a1-4c60-4c2e-96dc-bc4f74d57561" path="/var/lib/kubelet/pods/227845a1-4c60-4c2e-96dc-bc4f74d57561/volumes"
	Oct 18 09:34:09 addons-006674 kubelet[1287]: I1018 09:34:09.242323    1287 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f10908d4-0183-470e-8368-efa28b023a6c-gcp-creds\") pod \"f10908d4-0183-470e-8368-efa28b023a6c\" (UID: \"f10908d4-0183-470e-8368-efa28b023a6c\") "
	Oct 18 09:34:09 addons-006674 kubelet[1287]: I1018 09:34:09.242401    1287 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/f10908d4-0183-470e-8368-efa28b023a6c-script\") pod \"f10908d4-0183-470e-8368-efa28b023a6c\" (UID: \"f10908d4-0183-470e-8368-efa28b023a6c\") "
	Oct 18 09:34:09 addons-006674 kubelet[1287]: I1018 09:34:09.242454    1287 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wdg6\" (UniqueName: \"kubernetes.io/projected/f10908d4-0183-470e-8368-efa28b023a6c-kube-api-access-8wdg6\") pod \"f10908d4-0183-470e-8368-efa28b023a6c\" (UID: \"f10908d4-0183-470e-8368-efa28b023a6c\") "
	Oct 18 09:34:09 addons-006674 kubelet[1287]: I1018 09:34:09.242498    1287 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f10908d4-0183-470e-8368-efa28b023a6c-data\") pod \"f10908d4-0183-470e-8368-efa28b023a6c\" (UID: \"f10908d4-0183-470e-8368-efa28b023a6c\") "
	Oct 18 09:34:09 addons-006674 kubelet[1287]: I1018 09:34:09.242679    1287 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f10908d4-0183-470e-8368-efa28b023a6c-data" (OuterVolumeSpecName: "data") pod "f10908d4-0183-470e-8368-efa28b023a6c" (UID: "f10908d4-0183-470e-8368-efa28b023a6c"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 18 09:34:09 addons-006674 kubelet[1287]: I1018 09:34:09.243040    1287 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f10908d4-0183-470e-8368-efa28b023a6c-script" (OuterVolumeSpecName: "script") pod "f10908d4-0183-470e-8368-efa28b023a6c" (UID: "f10908d4-0183-470e-8368-efa28b023a6c"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 18 09:34:09 addons-006674 kubelet[1287]: I1018 09:34:09.243167    1287 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f10908d4-0183-470e-8368-efa28b023a6c-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "f10908d4-0183-470e-8368-efa28b023a6c" (UID: "f10908d4-0183-470e-8368-efa28b023a6c"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 18 09:34:09 addons-006674 kubelet[1287]: I1018 09:34:09.245216    1287 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f10908d4-0183-470e-8368-efa28b023a6c-kube-api-access-8wdg6" (OuterVolumeSpecName: "kube-api-access-8wdg6") pod "f10908d4-0183-470e-8368-efa28b023a6c" (UID: "f10908d4-0183-470e-8368-efa28b023a6c"). InnerVolumeSpecName "kube-api-access-8wdg6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 18 09:34:09 addons-006674 kubelet[1287]: I1018 09:34:09.343882    1287 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f10908d4-0183-470e-8368-efa28b023a6c-data\") on node \"addons-006674\" DevicePath \"\""
	Oct 18 09:34:09 addons-006674 kubelet[1287]: I1018 09:34:09.343932    1287 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f10908d4-0183-470e-8368-efa28b023a6c-gcp-creds\") on node \"addons-006674\" DevicePath \"\""
	Oct 18 09:34:09 addons-006674 kubelet[1287]: I1018 09:34:09.343944    1287 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/f10908d4-0183-470e-8368-efa28b023a6c-script\") on node \"addons-006674\" DevicePath \"\""
	Oct 18 09:34:09 addons-006674 kubelet[1287]: I1018 09:34:09.343955    1287 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8wdg6\" (UniqueName: \"kubernetes.io/projected/f10908d4-0183-470e-8368-efa28b023a6c-kube-api-access-8wdg6\") on node \"addons-006674\" DevicePath \"\""
	Oct 18 09:34:10 addons-006674 kubelet[1287]: I1018 09:34:10.084258    1287 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e45911b8bb5799d4d5488d025fb93cd06510d124a5d51e44eff3a25a8263c78"
	Oct 18 09:34:10 addons-006674 kubelet[1287]: E1018 09:34:10.086516    1287 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e\" is forbidden: User \"system:node:addons-006674\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-006674' and this object" podUID="f10908d4-0183-470e-8368-efa28b023a6c" pod="local-path-storage/helper-pod-delete-pvc-a1742402-0986-435b-8326-e21304879a9e"
	Oct 18 09:34:11 addons-006674 kubelet[1287]: I1018 09:34:11.028654    1287 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f10908d4-0183-470e-8368-efa28b023a6c" path="/var/lib/kubelet/pods/f10908d4-0183-470e-8368-efa28b023a6c/volumes"
	Oct 18 09:34:12 addons-006674 kubelet[1287]: E1018 09:34:12.467693    1287 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-tjsdw" podUID="23cd49e2-ec97-44a9-9bd9-370ba2b403c4"
	Oct 18 09:34:15 addons-006674 kubelet[1287]: I1018 09:34:15.025504    1287 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-46rp2" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [8ba1ab4998b33157d1c11d514e67020abe0f4da2b6dbd327b40e0e14cb877744] <==
	W1018 09:33:49.433025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:51.437010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:51.442424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:53.446025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:53.450596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:55.453515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:55.458075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:57.461576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:57.466416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:59.469307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:59.473837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:01.477841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:01.482293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:03.485548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:03.490494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:05.495294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:05.502909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:07.506257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:07.514077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:09.517860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:09.526022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:11.530474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:11.536256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:13.539740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:13.545352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-006674 -n addons-006674
helpers_test.go:269: (dbg) Run:  kubectl --context addons-006674 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-zp84p ingress-nginx-admission-patch-rlk7w registry-creds-764b6fb674-tjsdw
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-006674 describe pod ingress-nginx-admission-create-zp84p ingress-nginx-admission-patch-rlk7w registry-creds-764b6fb674-tjsdw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-006674 describe pod ingress-nginx-admission-create-zp84p ingress-nginx-admission-patch-rlk7w registry-creds-764b6fb674-tjsdw: exit status 1 (85.112068ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zp84p" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rlk7w" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-tjsdw" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-006674 describe pod ingress-nginx-admission-create-zp84p ingress-nginx-admission-patch-rlk7w registry-creds-764b6fb674-tjsdw: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-006674 addons disable headlamp --alsologtostderr -v=1: exit status 11 (289.98954ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:34:16.307205  303647 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:34:16.307937  303647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:16.307972  303647 out.go:374] Setting ErrFile to fd 2...
	I1018 09:34:16.307994  303647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:16.308280  303647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:34:16.308615  303647 mustload.go:65] Loading cluster: addons-006674
	I1018 09:34:16.309067  303647 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:16.309100  303647 addons.go:606] checking whether the cluster is paused
	I1018 09:34:16.309261  303647 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:16.309288  303647 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:34:16.309756  303647 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:34:16.327609  303647 ssh_runner.go:195] Run: systemctl --version
	I1018 09:34:16.327668  303647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:34:16.345814  303647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:34:16.453734  303647 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:34:16.453837  303647 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:34:16.485336  303647 cri.go:89] found id: "92482b56ebf7555fd05147ea25c2da176d87de2950820c3294d15ee1cae2b52d"
	I1018 09:34:16.485364  303647 cri.go:89] found id: "2668cbad9c190dab776247ab10f13e5d60a628e8326305be890dfb8023e10693"
	I1018 09:34:16.485370  303647 cri.go:89] found id: "3868b4ac74b7b2e804174805500883d50e014524523f2fcde2d34c8dae255aa3"
	I1018 09:34:16.485373  303647 cri.go:89] found id: "03c9c979a54ef881644e7a011bf46c6b361f61e955be32b471adedb4f1a228fa"
	I1018 09:34:16.485377  303647 cri.go:89] found id: "9d6a5e7844b19d29c3ee472ccc2ff323792accf04d9c7596b7995838d6ef2216"
	I1018 09:34:16.485380  303647 cri.go:89] found id: "025d3e64c63bd07bcb96631e06f0121dadeb4099055266bb9e87560dbbfdbe24"
	I1018 09:34:16.485384  303647 cri.go:89] found id: "e66aaf86ae284811e190a01db6cd600e4e81b9b038b9d7bdbf9e98398afc5f21"
	I1018 09:34:16.485386  303647 cri.go:89] found id: "fc5f92cc54e3945a4051248c76127d44b77cd5ad41e7680481bf12c73368473b"
	I1018 09:34:16.485391  303647 cri.go:89] found id: "4ed69c6d109cc4bbd324675d793ff430f77eb44fa1add8cd214ea977b38e369c"
	I1018 09:34:16.485397  303647 cri.go:89] found id: "442597e18340796966eb4234f5a955b362dab31d6337efdd6c0daac25ab74e5f"
	I1018 09:34:16.485400  303647 cri.go:89] found id: "54b6974a01255eb0d8fc4a27a1fff1addf769a358124f1111139388415ca2915"
	I1018 09:34:16.485404  303647 cri.go:89] found id: "d7a1cd7ba1844e20a9b434534d2ace9dc4b8410daae08b71ea72c8b4983d46d2"
	I1018 09:34:16.485407  303647 cri.go:89] found id: "1aec9843e6b35b7265e47196412e8358c0ebe00a6e40a979d385546804b7b85a"
	I1018 09:34:16.485410  303647 cri.go:89] found id: "fdaf99bae646f8f12090f49649ca8839c3524ff82dc518bbcc5c5bb5e5652ec8"
	I1018 09:34:16.485418  303647 cri.go:89] found id: "faa78827234374214c9f4cdd38747d941a5f322f9f1a6eb45f5a61fc89ba3085"
	I1018 09:34:16.485424  303647 cri.go:89] found id: "8ba1ab4998b33157d1c11d514e67020abe0f4da2b6dbd327b40e0e14cb877744"
	I1018 09:34:16.485431  303647 cri.go:89] found id: "7a4cd51451e0593916b537cc8613320fe84f5ad1b48e9c20ea79b02ebff89f08"
	I1018 09:34:16.485435  303647 cri.go:89] found id: "ee39b4a9868c7aec2142eb39fa00467bfd823efe9960710ad5f7a6d956fff7cc"
	I1018 09:34:16.485438  303647 cri.go:89] found id: "6864cc8c9035cc4900e88044a87d6126b379de12ae10cf15ebcbac3d449777c6"
	I1018 09:34:16.485441  303647 cri.go:89] found id: "7c7055bef3a7ada650e4d5f05a879413867ddb0163357c223b1f47a1b921b99f"
	I1018 09:34:16.485446  303647 cri.go:89] found id: "265553ed8d31e015701ccfb66997006c5a0cb46907fc11e25d67d2b5235e54e6"
	I1018 09:34:16.485449  303647 cri.go:89] found id: "218e3162f40e71fa576a92a613a2a422c61a439446739273ed3ec3b5b069db24"
	I1018 09:34:16.485452  303647 cri.go:89] found id: "ca64f5775c712d47b50002e93a4481eb4abcb5b068389fb2bfc06c1f7f58345c"
	I1018 09:34:16.485455  303647 cri.go:89] found id: ""
	I1018 09:34:16.485507  303647 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:34:16.514319  303647 out.go:203] 
	W1018 09:34:16.519911  303647 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:34:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:34:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:34:16.519937  303647 out.go:285] * 
	* 
	W1018 09:34:16.526801  303647 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:34:16.531851  303647 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-006674 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.42s)
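
Note that the exit 11 above is not Headlamp-specific. Before disabling an addon, minikube checks whether the cluster is paused by listing runc containers on the node, and on this crio image `sudo runc list -f json` aborts because /run/runc is missing. A minimal sketch to confirm the failing step by hand, assuming the addons-006674 profile is still running (the `ssh --` wrapper is an assumption; the inner commands are copied verbatim from the trace above):

	# CRI-level listing succeeds, so the containers themselves are fine
	out/minikube-linux-arm64 -p addons-006674 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

	# the pause check that fails: runc looks in its default state dir /run/runc
	out/minikube-linux-arm64 -p addons-006674 ssh -- sudo runc list -f json

If the second command reproduces "open /run/runc: no such file or directory" outside the harness, every `addons disable` in this run will exit with MK_ADDON_DISABLE_PAUSED, regardless of which addon is being disabled.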

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-ld2w5" [0e2a818b-a8af-48dd-b8ef-662a0a5699dc] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003944849s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-006674 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (255.818405ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:34:12.914905  303137 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:34:12.915839  303137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:12.915861  303137 out.go:374] Setting ErrFile to fd 2...
	I1018 09:34:12.915868  303137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:12.916317  303137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:34:12.916705  303137 mustload.go:65] Loading cluster: addons-006674
	I1018 09:34:12.917149  303137 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:12.917173  303137 addons.go:606] checking whether the cluster is paused
	I1018 09:34:12.917366  303137 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:12.917406  303137 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:34:12.917941  303137 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:34:12.935371  303137 ssh_runner.go:195] Run: systemctl --version
	I1018 09:34:12.935433  303137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:34:12.952118  303137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:34:13.056045  303137 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:34:13.056135  303137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:34:13.086066  303137 cri.go:89] found id: "92482b56ebf7555fd05147ea25c2da176d87de2950820c3294d15ee1cae2b52d"
	I1018 09:34:13.086086  303137 cri.go:89] found id: "2668cbad9c190dab776247ab10f13e5d60a628e8326305be890dfb8023e10693"
	I1018 09:34:13.086091  303137 cri.go:89] found id: "3868b4ac74b7b2e804174805500883d50e014524523f2fcde2d34c8dae255aa3"
	I1018 09:34:13.086095  303137 cri.go:89] found id: "03c9c979a54ef881644e7a011bf46c6b361f61e955be32b471adedb4f1a228fa"
	I1018 09:34:13.086098  303137 cri.go:89] found id: "9d6a5e7844b19d29c3ee472ccc2ff323792accf04d9c7596b7995838d6ef2216"
	I1018 09:34:13.086102  303137 cri.go:89] found id: "025d3e64c63bd07bcb96631e06f0121dadeb4099055266bb9e87560dbbfdbe24"
	I1018 09:34:13.086105  303137 cri.go:89] found id: "e66aaf86ae284811e190a01db6cd600e4e81b9b038b9d7bdbf9e98398afc5f21"
	I1018 09:34:13.086108  303137 cri.go:89] found id: "fc5f92cc54e3945a4051248c76127d44b77cd5ad41e7680481bf12c73368473b"
	I1018 09:34:13.086111  303137 cri.go:89] found id: "4ed69c6d109cc4bbd324675d793ff430f77eb44fa1add8cd214ea977b38e369c"
	I1018 09:34:13.086126  303137 cri.go:89] found id: "442597e18340796966eb4234f5a955b362dab31d6337efdd6c0daac25ab74e5f"
	I1018 09:34:13.086130  303137 cri.go:89] found id: "54b6974a01255eb0d8fc4a27a1fff1addf769a358124f1111139388415ca2915"
	I1018 09:34:13.086133  303137 cri.go:89] found id: "d7a1cd7ba1844e20a9b434534d2ace9dc4b8410daae08b71ea72c8b4983d46d2"
	I1018 09:34:13.086136  303137 cri.go:89] found id: "1aec9843e6b35b7265e47196412e8358c0ebe00a6e40a979d385546804b7b85a"
	I1018 09:34:13.086140  303137 cri.go:89] found id: "fdaf99bae646f8f12090f49649ca8839c3524ff82dc518bbcc5c5bb5e5652ec8"
	I1018 09:34:13.086147  303137 cri.go:89] found id: "faa78827234374214c9f4cdd38747d941a5f322f9f1a6eb45f5a61fc89ba3085"
	I1018 09:34:13.086153  303137 cri.go:89] found id: "8ba1ab4998b33157d1c11d514e67020abe0f4da2b6dbd327b40e0e14cb877744"
	I1018 09:34:13.086156  303137 cri.go:89] found id: "7a4cd51451e0593916b537cc8613320fe84f5ad1b48e9c20ea79b02ebff89f08"
	I1018 09:34:13.086160  303137 cri.go:89] found id: "ee39b4a9868c7aec2142eb39fa00467bfd823efe9960710ad5f7a6d956fff7cc"
	I1018 09:34:13.086163  303137 cri.go:89] found id: "6864cc8c9035cc4900e88044a87d6126b379de12ae10cf15ebcbac3d449777c6"
	I1018 09:34:13.086166  303137 cri.go:89] found id: "7c7055bef3a7ada650e4d5f05a879413867ddb0163357c223b1f47a1b921b99f"
	I1018 09:34:13.086170  303137 cri.go:89] found id: "265553ed8d31e015701ccfb66997006c5a0cb46907fc11e25d67d2b5235e54e6"
	I1018 09:34:13.086177  303137 cri.go:89] found id: "218e3162f40e71fa576a92a613a2a422c61a439446739273ed3ec3b5b069db24"
	I1018 09:34:13.086181  303137 cri.go:89] found id: "ca64f5775c712d47b50002e93a4481eb4abcb5b068389fb2bfc06c1f7f58345c"
	I1018 09:34:13.086184  303137 cri.go:89] found id: ""
	I1018 09:34:13.086233  303137 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:34:13.102602  303137 out.go:203] 
	W1018 09:34:13.105532  303137 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:34:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:34:13.105566  303137 out.go:285] * 
	W1018 09:34:13.112189  303137 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:34:13.115215  303137 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-006674 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.27s)
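
Root cause for this disable failure: before disabling an addon, minikube checks whether the cluster is paused by shelling out to `sudo runc list -f json`, but this CRI-O node runs crun as its OCI runtime, so the runc state directory /run/runc never exists and the check exits 1. A minimal runtime-aware fallback could look like the Go sketch below (a hypothetical helper, not minikube's actual code; it assumes crun's runc-compatible CLI accepts the same `list -f json` flags):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// listContainers shells out to the node's OCI runtime, preferring runc
	// but falling back to crun when /run/runc is absent (the exact failure
	// recorded in the stderr dump above).
	func listContainers() ([]byte, error) {
		runtimeBin := "runc"
		if _, err := os.Stat("/run/runc"); os.IsNotExist(err) {
			// CRI-O configured with crun keeps runtime state elsewhere,
			// so `runc list` can never see its containers.
			runtimeBin = "crun"
		}
		out, err := exec.Command("sudo", runtimeBin, "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("%s list -f json: %w", runtimeBin, err)
		}
		return out, nil
	}

	func main() {
		out, err := listContainers()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		os.Stdout.Write(out)
	}

Probing the state directory instead of hard-coding runc keeps the paused check working on either runtime.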

TestAddons/parallel/LocalPath (8.53s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-006674 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-006674 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006674 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [227845a1-4c60-4c2e-96dc-bc4f74d57561] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [227845a1-4c60-4c2e-96dc-bc4f74d57561] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [227845a1-4c60-4c2e-96dc-bc4f74d57561] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003442519s
addons_test.go:967: (dbg) Run:  kubectl --context addons-006674 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 ssh "cat /opt/local-path-provisioner/pvc-a1742402-0986-435b-8326-e21304879a9e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-006674 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-006674 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-006674 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (374.621564ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:34:07.541240  302997 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:34:07.542092  302997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:07.542130  302997 out.go:374] Setting ErrFile to fd 2...
	I1018 09:34:07.542152  302997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:07.542457  302997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:34:07.542796  302997 mustload.go:65] Loading cluster: addons-006674
	I1018 09:34:07.543229  302997 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:07.543269  302997 addons.go:606] checking whether the cluster is paused
	I1018 09:34:07.543407  302997 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:07.543444  302997 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:34:07.543934  302997 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:34:07.569736  302997 ssh_runner.go:195] Run: systemctl --version
	I1018 09:34:07.569792  302997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:34:07.609860  302997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:34:07.719980  302997 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:34:07.720058  302997 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:34:07.795027  302997 cri.go:89] found id: "92482b56ebf7555fd05147ea25c2da176d87de2950820c3294d15ee1cae2b52d"
	I1018 09:34:07.795051  302997 cri.go:89] found id: "2668cbad9c190dab776247ab10f13e5d60a628e8326305be890dfb8023e10693"
	I1018 09:34:07.795056  302997 cri.go:89] found id: "3868b4ac74b7b2e804174805500883d50e014524523f2fcde2d34c8dae255aa3"
	I1018 09:34:07.795060  302997 cri.go:89] found id: "03c9c979a54ef881644e7a011bf46c6b361f61e955be32b471adedb4f1a228fa"
	I1018 09:34:07.795063  302997 cri.go:89] found id: "9d6a5e7844b19d29c3ee472ccc2ff323792accf04d9c7596b7995838d6ef2216"
	I1018 09:34:07.795066  302997 cri.go:89] found id: "025d3e64c63bd07bcb96631e06f0121dadeb4099055266bb9e87560dbbfdbe24"
	I1018 09:34:07.795069  302997 cri.go:89] found id: "e66aaf86ae284811e190a01db6cd600e4e81b9b038b9d7bdbf9e98398afc5f21"
	I1018 09:34:07.795072  302997 cri.go:89] found id: "fc5f92cc54e3945a4051248c76127d44b77cd5ad41e7680481bf12c73368473b"
	I1018 09:34:07.795076  302997 cri.go:89] found id: "4ed69c6d109cc4bbd324675d793ff430f77eb44fa1add8cd214ea977b38e369c"
	I1018 09:34:07.795086  302997 cri.go:89] found id: "442597e18340796966eb4234f5a955b362dab31d6337efdd6c0daac25ab74e5f"
	I1018 09:34:07.795110  302997 cri.go:89] found id: "54b6974a01255eb0d8fc4a27a1fff1addf769a358124f1111139388415ca2915"
	I1018 09:34:07.795117  302997 cri.go:89] found id: "d7a1cd7ba1844e20a9b434534d2ace9dc4b8410daae08b71ea72c8b4983d46d2"
	I1018 09:34:07.795120  302997 cri.go:89] found id: "1aec9843e6b35b7265e47196412e8358c0ebe00a6e40a979d385546804b7b85a"
	I1018 09:34:07.795124  302997 cri.go:89] found id: "fdaf99bae646f8f12090f49649ca8839c3524ff82dc518bbcc5c5bb5e5652ec8"
	I1018 09:34:07.795127  302997 cri.go:89] found id: "faa78827234374214c9f4cdd38747d941a5f322f9f1a6eb45f5a61fc89ba3085"
	I1018 09:34:07.795135  302997 cri.go:89] found id: "8ba1ab4998b33157d1c11d514e67020abe0f4da2b6dbd327b40e0e14cb877744"
	I1018 09:34:07.795143  302997 cri.go:89] found id: "7a4cd51451e0593916b537cc8613320fe84f5ad1b48e9c20ea79b02ebff89f08"
	I1018 09:34:07.795149  302997 cri.go:89] found id: "ee39b4a9868c7aec2142eb39fa00467bfd823efe9960710ad5f7a6d956fff7cc"
	I1018 09:34:07.795153  302997 cri.go:89] found id: "6864cc8c9035cc4900e88044a87d6126b379de12ae10cf15ebcbac3d449777c6"
	I1018 09:34:07.795156  302997 cri.go:89] found id: "7c7055bef3a7ada650e4d5f05a879413867ddb0163357c223b1f47a1b921b99f"
	I1018 09:34:07.795160  302997 cri.go:89] found id: "265553ed8d31e015701ccfb66997006c5a0cb46907fc11e25d67d2b5235e54e6"
	I1018 09:34:07.795163  302997 cri.go:89] found id: "218e3162f40e71fa576a92a613a2a422c61a439446739273ed3ec3b5b069db24"
	I1018 09:34:07.795166  302997 cri.go:89] found id: "ca64f5775c712d47b50002e93a4481eb4abcb5b068389fb2bfc06c1f7f58345c"
	I1018 09:34:07.795169  302997 cri.go:89] found id: ""
	I1018 09:34:07.795231  302997 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:34:07.831204  302997 out.go:203] 
	W1018 09:34:07.834507  302997 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:34:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:34:07.834595  302997 out.go:285] * 
	W1018 09:34:07.841091  302997 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:34:07.844805  302997 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-006674 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.53s)
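
Note that LocalPath's provisioning steps themselves all passed (the PVC bound, the busybox pod ran to Succeeded, and file1 was readable over ssh); only the trailing storage-provisioner-rancher disable tripped the same `runc list` check as above. For reference, a minimal sketch of the phase polling that helpers_test.go performs in this section, assuming only kubectl on PATH and the profile name from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCPhase polls kubectl the same way helpers_test.go does above,
	// returning once the claim reports the wanted phase.
	func waitForPVCPhase(context, name, want string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", context,
				"get", "pvc", name, "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == want {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %q did not reach %s within %s", name, want, timeout)
	}

	func main() {
		if err := waitForPVCPhase("addons-006674", "test-pvc", "Bound", 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}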

TestAddons/parallel/NvidiaDevicePlugin (6.28s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-j658f" [a582f724-b46c-4377-b626-fcf59ae12980] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003961688s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-006674 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (269.972536ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:33:59.105422  302616 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:33:59.106172  302616 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:59.106186  302616 out.go:374] Setting ErrFile to fd 2...
	I1018 09:33:59.106191  302616 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:59.106454  302616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:33:59.106757  302616 mustload.go:65] Loading cluster: addons-006674
	I1018 09:33:59.107162  302616 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:59.107183  302616 addons.go:606] checking whether the cluster is paused
	I1018 09:33:59.107288  302616 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:59.107309  302616 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:33:59.107750  302616 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:33:59.129392  302616 ssh_runner.go:195] Run: systemctl --version
	I1018 09:33:59.129460  302616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:33:59.150811  302616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:33:59.259764  302616 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:33:59.259867  302616 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:33:59.288818  302616 cri.go:89] found id: "92482b56ebf7555fd05147ea25c2da176d87de2950820c3294d15ee1cae2b52d"
	I1018 09:33:59.288839  302616 cri.go:89] found id: "2668cbad9c190dab776247ab10f13e5d60a628e8326305be890dfb8023e10693"
	I1018 09:33:59.288849  302616 cri.go:89] found id: "3868b4ac74b7b2e804174805500883d50e014524523f2fcde2d34c8dae255aa3"
	I1018 09:33:59.288854  302616 cri.go:89] found id: "03c9c979a54ef881644e7a011bf46c6b361f61e955be32b471adedb4f1a228fa"
	I1018 09:33:59.288858  302616 cri.go:89] found id: "9d6a5e7844b19d29c3ee472ccc2ff323792accf04d9c7596b7995838d6ef2216"
	I1018 09:33:59.288861  302616 cri.go:89] found id: "025d3e64c63bd07bcb96631e06f0121dadeb4099055266bb9e87560dbbfdbe24"
	I1018 09:33:59.288864  302616 cri.go:89] found id: "e66aaf86ae284811e190a01db6cd600e4e81b9b038b9d7bdbf9e98398afc5f21"
	I1018 09:33:59.288867  302616 cri.go:89] found id: "fc5f92cc54e3945a4051248c76127d44b77cd5ad41e7680481bf12c73368473b"
	I1018 09:33:59.288870  302616 cri.go:89] found id: "4ed69c6d109cc4bbd324675d793ff430f77eb44fa1add8cd214ea977b38e369c"
	I1018 09:33:59.288876  302616 cri.go:89] found id: "442597e18340796966eb4234f5a955b362dab31d6337efdd6c0daac25ab74e5f"
	I1018 09:33:59.288880  302616 cri.go:89] found id: "54b6974a01255eb0d8fc4a27a1fff1addf769a358124f1111139388415ca2915"
	I1018 09:33:59.288887  302616 cri.go:89] found id: "d7a1cd7ba1844e20a9b434534d2ace9dc4b8410daae08b71ea72c8b4983d46d2"
	I1018 09:33:59.288890  302616 cri.go:89] found id: "1aec9843e6b35b7265e47196412e8358c0ebe00a6e40a979d385546804b7b85a"
	I1018 09:33:59.288894  302616 cri.go:89] found id: "fdaf99bae646f8f12090f49649ca8839c3524ff82dc518bbcc5c5bb5e5652ec8"
	I1018 09:33:59.288898  302616 cri.go:89] found id: "faa78827234374214c9f4cdd38747d941a5f322f9f1a6eb45f5a61fc89ba3085"
	I1018 09:33:59.288902  302616 cri.go:89] found id: "8ba1ab4998b33157d1c11d514e67020abe0f4da2b6dbd327b40e0e14cb877744"
	I1018 09:33:59.288908  302616 cri.go:89] found id: "7a4cd51451e0593916b537cc8613320fe84f5ad1b48e9c20ea79b02ebff89f08"
	I1018 09:33:59.288912  302616 cri.go:89] found id: "ee39b4a9868c7aec2142eb39fa00467bfd823efe9960710ad5f7a6d956fff7cc"
	I1018 09:33:59.288915  302616 cri.go:89] found id: "6864cc8c9035cc4900e88044a87d6126b379de12ae10cf15ebcbac3d449777c6"
	I1018 09:33:59.288918  302616 cri.go:89] found id: "7c7055bef3a7ada650e4d5f05a879413867ddb0163357c223b1f47a1b921b99f"
	I1018 09:33:59.288922  302616 cri.go:89] found id: "265553ed8d31e015701ccfb66997006c5a0cb46907fc11e25d67d2b5235e54e6"
	I1018 09:33:59.288925  302616 cri.go:89] found id: "218e3162f40e71fa576a92a613a2a422c61a439446739273ed3ec3b5b069db24"
	I1018 09:33:59.288928  302616 cri.go:89] found id: "ca64f5775c712d47b50002e93a4481eb4abcb5b068389fb2bfc06c1f7f58345c"
	I1018 09:33:59.288931  302616 cri.go:89] found id: ""
	I1018 09:33:59.288981  302616 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:33:59.303598  302616 out.go:203] 
	W1018 09:33:59.306432  302616 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:33:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:33:59.306465  302616 out.go:285] * 
	W1018 09:33:59.312792  302616 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:33:59.315681  302616 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-006674 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.28s)
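
This failure and the Yakd failure below carry the identical signature: the workload reports healthy, then `addons disable` exits 11 on the same `sudo runc list -f json` check; see the runtime-fallback sketch under the CloudSpanner failure above.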

TestAddons/parallel/Yakd (6.31s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-zncv4" [fb85697f-16d9-4b24-adba-90d388591c5c] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003694169s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006674 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-006674 addons disable yakd --alsologtostderr -v=1: exit status 11 (297.405266ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:33:42.041436  302163 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:33:42.042914  302163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:42.042950  302163 out.go:374] Setting ErrFile to fd 2...
	I1018 09:33:42.042959  302163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:42.043313  302163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:33:42.043724  302163 mustload.go:65] Loading cluster: addons-006674
	I1018 09:33:42.044137  302163 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:42.044163  302163 addons.go:606] checking whether the cluster is paused
	I1018 09:33:42.044322  302163 config.go:182] Loaded profile config "addons-006674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:42.044348  302163 host.go:66] Checking if "addons-006674" exists ...
	I1018 09:33:42.045003  302163 cli_runner.go:164] Run: docker container inspect addons-006674 --format={{.State.Status}}
	I1018 09:33:42.065005  302163 ssh_runner.go:195] Run: systemctl --version
	I1018 09:33:42.065069  302163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006674
	I1018 09:33:42.084295  302163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/addons-006674/id_rsa Username:docker}
	I1018 09:33:42.194002  302163 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:33:42.194207  302163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:33:42.237970  302163 cri.go:89] found id: "92482b56ebf7555fd05147ea25c2da176d87de2950820c3294d15ee1cae2b52d"
	I1018 09:33:42.238041  302163 cri.go:89] found id: "2668cbad9c190dab776247ab10f13e5d60a628e8326305be890dfb8023e10693"
	I1018 09:33:42.238061  302163 cri.go:89] found id: "3868b4ac74b7b2e804174805500883d50e014524523f2fcde2d34c8dae255aa3"
	I1018 09:33:42.238081  302163 cri.go:89] found id: "03c9c979a54ef881644e7a011bf46c6b361f61e955be32b471adedb4f1a228fa"
	I1018 09:33:42.238101  302163 cri.go:89] found id: "9d6a5e7844b19d29c3ee472ccc2ff323792accf04d9c7596b7995838d6ef2216"
	I1018 09:33:42.238135  302163 cri.go:89] found id: "025d3e64c63bd07bcb96631e06f0121dadeb4099055266bb9e87560dbbfdbe24"
	I1018 09:33:42.238153  302163 cri.go:89] found id: "e66aaf86ae284811e190a01db6cd600e4e81b9b038b9d7bdbf9e98398afc5f21"
	I1018 09:33:42.238171  302163 cri.go:89] found id: "fc5f92cc54e3945a4051248c76127d44b77cd5ad41e7680481bf12c73368473b"
	I1018 09:33:42.238193  302163 cri.go:89] found id: "4ed69c6d109cc4bbd324675d793ff430f77eb44fa1add8cd214ea977b38e369c"
	I1018 09:33:42.238231  302163 cri.go:89] found id: "442597e18340796966eb4234f5a955b362dab31d6337efdd6c0daac25ab74e5f"
	I1018 09:33:42.238258  302163 cri.go:89] found id: "54b6974a01255eb0d8fc4a27a1fff1addf769a358124f1111139388415ca2915"
	I1018 09:33:42.238277  302163 cri.go:89] found id: "d7a1cd7ba1844e20a9b434534d2ace9dc4b8410daae08b71ea72c8b4983d46d2"
	I1018 09:33:42.238295  302163 cri.go:89] found id: "1aec9843e6b35b7265e47196412e8358c0ebe00a6e40a979d385546804b7b85a"
	I1018 09:33:42.238313  302163 cri.go:89] found id: "fdaf99bae646f8f12090f49649ca8839c3524ff82dc518bbcc5c5bb5e5652ec8"
	I1018 09:33:42.238344  302163 cri.go:89] found id: "faa78827234374214c9f4cdd38747d941a5f322f9f1a6eb45f5a61fc89ba3085"
	I1018 09:33:42.238372  302163 cri.go:89] found id: "8ba1ab4998b33157d1c11d514e67020abe0f4da2b6dbd327b40e0e14cb877744"
	I1018 09:33:42.238404  302163 cri.go:89] found id: "7a4cd51451e0593916b537cc8613320fe84f5ad1b48e9c20ea79b02ebff89f08"
	I1018 09:33:42.238478  302163 cri.go:89] found id: "ee39b4a9868c7aec2142eb39fa00467bfd823efe9960710ad5f7a6d956fff7cc"
	I1018 09:33:42.238505  302163 cri.go:89] found id: "6864cc8c9035cc4900e88044a87d6126b379de12ae10cf15ebcbac3d449777c6"
	I1018 09:33:42.238525  302163 cri.go:89] found id: "7c7055bef3a7ada650e4d5f05a879413867ddb0163357c223b1f47a1b921b99f"
	I1018 09:33:42.238558  302163 cri.go:89] found id: "265553ed8d31e015701ccfb66997006c5a0cb46907fc11e25d67d2b5235e54e6"
	I1018 09:33:42.238603  302163 cri.go:89] found id: "218e3162f40e71fa576a92a613a2a422c61a439446739273ed3ec3b5b069db24"
	I1018 09:33:42.238634  302163 cri.go:89] found id: "ca64f5775c712d47b50002e93a4481eb4abcb5b068389fb2bfc06c1f7f58345c"
	I1018 09:33:42.238657  302163 cri.go:89] found id: ""
	I1018 09:33:42.238750  302163 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:33:42.259039  302163 out.go:203] 
	W1018 09:33:42.262840  302163 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:33:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:33:42.262932  302163 out.go:285] * 
	W1018 09:33:42.269797  302163 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:33:42.273466  302163 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-006674 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.31s)

TestFunctional/parallel/ServiceCmdConnect (603.5s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-679784 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-679784 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-qskk4" [5f50a6ef-a568-4552-80f9-41f0546d6341] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-679784 -n functional-679784
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-18 09:51:13.079423875 +0000 UTC m=+1256.665084080
functional_test.go:1645: (dbg) Run:  kubectl --context functional-679784 describe po hello-node-connect-7d85dfc575-qskk4 -n default
functional_test.go:1645: (dbg) kubectl --context functional-679784 describe po hello-node-connect-7d85dfc575-qskk4 -n default:
Name:             hello-node-connect-7d85dfc575-qskk4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-679784/192.168.49.2
Start Time:       Sat, 18 Oct 2025 09:41:12 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8phv5 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-8phv5:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-qskk4 to functional-679784
Normal   Pulling    7m8s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m51s (x22 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m51s (x22 over 9m58s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-679784 logs hello-node-connect-7d85dfc575-qskk4 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-679784 logs hello-node-connect-7d85dfc575-qskk4 -n default: exit status 1 (105.561403ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-qskk4" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-679784 logs hello-node-connect-7d85dfc575-qskk4 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
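This one is an image-resolution failure rather than a runtime mismatch: the deployment was created with the bare short name `kicbase/echo-server`, and CRI-O on this node enforces short-name mode in registries.conf, so the kubelet refuses to pick among the unqualified-search registries and the pull never succeeds (`short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list`). Passing a fully qualified reference avoids the ambiguity; a small Go sketch of that normalization follows (docker.io is an assumed host here, use whichever registry actually serves the image):

	package main

	import (
		"fmt"
		"strings"
	)

	// qualifyImage prepends a registry host to bare short names so an
	// enforcing short-name policy never has to guess among unqualified
	// search registries. docker.io is an assumption; substitute the
	// registry that actually hosts the image.
	func qualifyImage(image string) string {
		if i := strings.Index(image, "/"); i > 0 {
			host := image[:i]
			if strings.ContainsAny(host, ".:") || host == "localhost" {
				return image // already carries a registry host
			}
		}
		return "docker.io/" + image
	}

	func main() {
		fmt.Println(qualifyImage("kicbase/echo-server"))       // docker.io/kicbase/echo-server
		fmt.Println(qualifyImage("registry.k8s.io/pause:3.1")) // unchanged
	}

With a qualified name the pull either succeeds or fails loudly against one registry, instead of backing off for the full 10m0s wait seen above.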
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-679784 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-qskk4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-679784/192.168.49.2
Start Time:       Sat, 18 Oct 2025 09:41:12 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8phv5 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-8phv5:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-qskk4 to functional-679784
Normal   Pulling    7m8s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m51s (x22 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m51s (x22 over 9m58s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-679784 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-679784 logs -l app=hello-node-connect: exit status 1 (91.829586ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-qskk4" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-679784 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-679784 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.107.193.18
IPs:                      10.107.193.18
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30520/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
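
Note the empty Endpoints line: because no echo-server pod ever became Ready, NodePort 30520 had no backends, which is consistent with the pull failures above rather than any service misconfiguration.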
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-679784
helpers_test.go:243: (dbg) docker inspect functional-679784:

-- stdout --
	[
	    {
	        "Id": "43ee3d0a2820f4f7bd206c2a44d06d6e980483b8d027feeaeb5cdf8db8db1235",
	        "Created": "2025-10-18T09:38:10.771901921Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 310987,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:38:10.848736448Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/43ee3d0a2820f4f7bd206c2a44d06d6e980483b8d027feeaeb5cdf8db8db1235/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/43ee3d0a2820f4f7bd206c2a44d06d6e980483b8d027feeaeb5cdf8db8db1235/hostname",
	        "HostsPath": "/var/lib/docker/containers/43ee3d0a2820f4f7bd206c2a44d06d6e980483b8d027feeaeb5cdf8db8db1235/hosts",
	        "LogPath": "/var/lib/docker/containers/43ee3d0a2820f4f7bd206c2a44d06d6e980483b8d027feeaeb5cdf8db8db1235/43ee3d0a2820f4f7bd206c2a44d06d6e980483b8d027feeaeb5cdf8db8db1235-json.log",
	        "Name": "/functional-679784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-679784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-679784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "43ee3d0a2820f4f7bd206c2a44d06d6e980483b8d027feeaeb5cdf8db8db1235",
	                "LowerDir": "/var/lib/docker/overlay2/50453eab93266737f6ea09c349032ebbd13d5e999f67abdd912c1d4d2606f0ab-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/50453eab93266737f6ea09c349032ebbd13d5e999f67abdd912c1d4d2606f0ab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/50453eab93266737f6ea09c349032ebbd13d5e999f67abdd912c1d4d2606f0ab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/50453eab93266737f6ea09c349032ebbd13d5e999f67abdd912c1d4d2606f0ab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-679784",
	                "Source": "/var/lib/docker/volumes/functional-679784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-679784",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-679784",
	                "name.minikube.sigs.k8s.io": "functional-679784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b6bd4fc704a752a30f07049e850147c319bf9699be9ab16abc0fc288e3260af8",
	            "SandboxKey": "/var/run/docker/netns/b6bd4fc704a7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-679784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:78:50:74:78:1b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b1a593c6b846b711747182ee57872d4c5910f5cf8025636483f1e4819532e89b",
	                    "EndpointID": "1c4a911ae80d41b9aebd25c61fa18a3cc6a2f504813147a4a47f9de0f7952a39",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-679784",
	                        "43ee3d0a2820"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-679784 -n functional-679784
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-679784 logs -n 25: (1.452336624s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-679784 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:40 UTC │ 18 Oct 25 09:40 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 18 Oct 25 09:40 UTC │ 18 Oct 25 09:40 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 18 Oct 25 09:40 UTC │ 18 Oct 25 09:40 UTC │
	│ kubectl │ functional-679784 kubectl -- --context functional-679784 get pods                                                          │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:40 UTC │ 18 Oct 25 09:40 UTC │
	│ start   │ -p functional-679784 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:40 UTC │ 18 Oct 25 09:40 UTC │
	│ service │ invalid-svc -p functional-679784                                                                                           │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ config  │ functional-679784 config unset cpus                                                                                        │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ cp      │ functional-679784 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ config  │ functional-679784 config get cpus                                                                                          │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ config  │ functional-679784 config set cpus 2                                                                                        │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ config  │ functional-679784 config get cpus                                                                                          │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ config  │ functional-679784 config unset cpus                                                                                        │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ config  │ functional-679784 config get cpus                                                                                          │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ functional-679784 ssh -n functional-679784 sudo cat /home/docker/cp-test.txt                                               │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ ssh     │ functional-679784 ssh echo hello                                                                                           │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ cp      │ functional-679784 cp functional-679784:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4151707404/001/cp-test.txt │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ ssh     │ functional-679784 ssh cat /etc/hostname                                                                                    │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ ssh     │ functional-679784 ssh -n functional-679784 sudo cat /home/docker/cp-test.txt                                               │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ tunnel  │ functional-679784 tunnel --alsologtostderr                                                                                 │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ tunnel  │ functional-679784 tunnel --alsologtostderr                                                                                 │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ cp      │ functional-679784 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ ssh     │ functional-679784 ssh -n functional-679784 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ tunnel  │ functional-679784 tunnel --alsologtostderr                                                                                 │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ addons  │ functional-679784 addons list                                                                                              │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ addons  │ functional-679784 addons list -o json                                                                                      │ functional-679784 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
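The Audit table above records each minikube invocation with its profile, user, binary version, and start/end times; rows with an empty END TIME are commands that did not record a clean exit (the deliberately invalid service call and the long-running tunnel runs). The cp/ssh pair in the middle is the usual copy-then-verify round trip; outside the test harness it amounts to (arguments taken from the ARGS column, modulo flag ordering):

	out/minikube-linux-arm64 -p functional-679784 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-679784 ssh -n functional-679784 sudo cat /home/docker/cp-test.txt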
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:40:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:40:02.138624  315151 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:40:02.138799  315151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:40:02.138804  315151 out.go:374] Setting ErrFile to fd 2...
	I1018 09:40:02.138807  315151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:40:02.139064  315151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:40:02.139481  315151 out.go:368] Setting JSON to false
	I1018 09:40:02.140423  315151 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4953,"bootTime":1760775450,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 09:40:02.140479  315151 start.go:141] virtualization:  
	I1018 09:40:02.143930  315151 out.go:179] * [functional-679784] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:40:02.146841  315151 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:40:02.146940  315151 notify.go:220] Checking for updates...
	I1018 09:40:02.152768  315151 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:40:02.155732  315151 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 09:40:02.158589  315151 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 09:40:02.161453  315151 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:40:02.164413  315151 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:40:02.167796  315151 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:40:02.167896  315151 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:40:02.206078  315151 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:40:02.206200  315151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:40:02.269500  315151 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-18 09:40:02.259776375 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:40:02.269601  315151 docker.go:318] overlay module found
	I1018 09:40:02.272845  315151 out.go:179] * Using the docker driver based on existing profile
	I1018 09:40:02.275696  315151 start.go:305] selected driver: docker
	I1018 09:40:02.275705  315151 start.go:925] validating driver "docker" against &{Name:functional-679784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-679784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:40:02.275811  315151 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:40:02.275948  315151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:40:02.335082  315151 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-18 09:40:02.325694274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:40:02.335543  315151 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:40:02.335576  315151 cni.go:84] Creating CNI manager for ""
	I1018 09:40:02.335633  315151 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:40:02.335678  315151 start.go:349] cluster config:
	{Name:functional-679784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-679784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:40:02.338972  315151 out.go:179] * Starting "functional-679784" primary control-plane node in "functional-679784" cluster
	I1018 09:40:02.341833  315151 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:40:02.344867  315151 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:40:02.347666  315151 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:40:02.347715  315151 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 09:40:02.347723  315151 cache.go:58] Caching tarball of preloaded images
	I1018 09:40:02.347745  315151 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:40:02.347819  315151 preload.go:233] Found /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 09:40:02.347828  315151 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:40:02.347950  315151 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/config.json ...
	I1018 09:40:02.367852  315151 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:40:02.367864  315151 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:40:02.367894  315151 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:40:02.367916  315151 start.go:360] acquireMachinesLock for functional-679784: {Name:mk58609b233e9314fb15767bb4cbc6a29ce74d5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:40:02.367976  315151 start.go:364] duration metric: took 43.473µs to acquireMachinesLock for "functional-679784"
	I1018 09:40:02.367994  315151 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:40:02.367999  315151 fix.go:54] fixHost starting: 
	I1018 09:40:02.368245  315151 cli_runner.go:164] Run: docker container inspect functional-679784 --format={{.State.Status}}
	I1018 09:40:02.385122  315151 fix.go:112] recreateIfNeeded on functional-679784: state=Running err=<nil>
	W1018 09:40:02.385142  315151 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:40:02.388356  315151 out.go:252] * Updating the running docker "functional-679784" container ...
	I1018 09:40:02.388378  315151 machine.go:93] provisionDockerMachine start ...
	I1018 09:40:02.388453  315151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679784
	I1018 09:40:02.405457  315151 main.go:141] libmachine: Using SSH client type: native
	I1018 09:40:02.405781  315151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1018 09:40:02.405788  315151 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:40:02.552778  315151 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-679784
	
	I1018 09:40:02.552800  315151 ubuntu.go:182] provisioning hostname "functional-679784"
	I1018 09:40:02.552871  315151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679784
	I1018 09:40:02.571727  315151 main.go:141] libmachine: Using SSH client type: native
	I1018 09:40:02.572012  315151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1018 09:40:02.572021  315151 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-679784 && echo "functional-679784" | sudo tee /etc/hostname
	I1018 09:40:02.727002  315151 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-679784
	
	I1018 09:40:02.727077  315151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679784
	I1018 09:40:02.745889  315151 main.go:141] libmachine: Using SSH client type: native
	I1018 09:40:02.746193  315151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1018 09:40:02.746207  315151 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-679784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-679784/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-679784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:40:02.897494  315151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:40:02.897510  315151 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 09:40:02.897530  315151 ubuntu.go:190] setting up certificates
	I1018 09:40:02.897539  315151 provision.go:84] configureAuth start
	I1018 09:40:02.897599  315151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-679784
	I1018 09:40:02.915088  315151 provision.go:143] copyHostCerts
	I1018 09:40:02.915142  315151 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 09:40:02.915168  315151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 09:40:02.915245  315151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 09:40:02.915342  315151 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 09:40:02.915346  315151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 09:40:02.915369  315151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 09:40:02.915421  315151 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 09:40:02.915427  315151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 09:40:02.915522  315151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 09:40:02.915583  315151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.functional-679784 san=[127.0.0.1 192.168.49.2 functional-679784 localhost minikube]
	I1018 09:40:03.266254  315151 provision.go:177] copyRemoteCerts
	I1018 09:40:03.266306  315151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:40:03.266344  315151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679784
	I1018 09:40:03.282895  315151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/functional-679784/id_rsa Username:docker}
	I1018 09:40:03.388997  315151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:40:03.408812  315151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:40:03.427474  315151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:40:03.445034  315151 provision.go:87] duration metric: took 547.472704ms to configureAuth
	I1018 09:40:03.445051  315151 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:40:03.445341  315151 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:40:03.445444  315151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679784
	I1018 09:40:03.461915  315151 main.go:141] libmachine: Using SSH client type: native
	I1018 09:40:03.462210  315151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1018 09:40:03.462224  315151 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:40:08.846202  315151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:40:08.846213  315151 machine.go:96] duration metric: took 6.457828858s to provisionDockerMachine
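The ~5s gap before this step is the crio restart triggered by the command above: provisioning writes a CRIO_MINIKUBE_OPTIONS drop-in and bounces the runtime so the service CIDR is trusted as an insecure registry. The file it leaves behind is just (a sketch of the expected contents, assuming the crio unit sources /etc/sysconfig/crio.minikube as an environment file; not captured in this log):

	# /etc/sysconfig/crio.minikube (assumed contents, per the tee command above)
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '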
	I1018 09:40:08.846223  315151 start.go:293] postStartSetup for "functional-679784" (driver="docker")
	I1018 09:40:08.846233  315151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:40:08.846303  315151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:40:08.846345  315151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679784
	I1018 09:40:08.864879  315151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/functional-679784/id_rsa Username:docker}
	I1018 09:40:08.969654  315151 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:40:08.973439  315151 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:40:08.973456  315151 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:40:08.973470  315151 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 09:40:08.973525  315151 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 09:40:08.973602  315151 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 09:40:08.973679  315151 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/test/nested/copy/295193/hosts -> hosts in /etc/test/nested/copy/295193
	I1018 09:40:08.973721  315151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/295193
	I1018 09:40:08.982647  315151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 09:40:09.002088  315151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/test/nested/copy/295193/hosts --> /etc/test/nested/copy/295193/hosts (40 bytes)
	I1018 09:40:09.019822  315151 start.go:296] duration metric: took 173.584193ms for postStartSetup
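postStartSetup above also shows minikube's file-sync convention: anything placed under the profile's .minikube/files directory is mirrored into the node at the same relative path, which is how the test's /etc/ssl/certs/2951932.pem and the nested /etc/test/nested/copy/295193/hosts fixture got there. A hypothetical example of staging a file for the next start (the etc/demo path is invented for illustration):

	mkdir -p /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/demo
	echo 'hello' > /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/demo/motd   # synced to /etc/demo/motd in the node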
	I1018 09:40:09.019891  315151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:40:09.019940  315151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679784
	I1018 09:40:09.037813  315151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/functional-679784/id_rsa Username:docker}
	I1018 09:40:09.138560  315151 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:40:09.143495  315151 fix.go:56] duration metric: took 6.775488284s for fixHost
	I1018 09:40:09.143510  315151 start.go:83] releasing machines lock for "functional-679784", held for 6.775527088s
	I1018 09:40:09.143580  315151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-679784
	I1018 09:40:09.160552  315151 ssh_runner.go:195] Run: cat /version.json
	I1018 09:40:09.160566  315151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:40:09.160594  315151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679784
	I1018 09:40:09.160615  315151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679784
	I1018 09:40:09.177569  315151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/functional-679784/id_rsa Username:docker}
	I1018 09:40:09.182806  315151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/functional-679784/id_rsa Username:docker}
	I1018 09:40:09.366256  315151 ssh_runner.go:195] Run: systemctl --version
	I1018 09:40:09.373330  315151 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:40:09.411349  315151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:40:09.415731  315151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:40:09.415798  315151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:40:09.423635  315151 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:40:09.423649  315151 start.go:495] detecting cgroup driver to use...
	I1018 09:40:09.423679  315151 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 09:40:09.423724  315151 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:40:09.440199  315151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:40:09.453751  315151 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:40:09.453809  315151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:40:09.469889  315151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:40:09.483491  315151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:40:09.626896  315151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:40:09.770309  315151 docker.go:234] disabling docker service ...
	I1018 09:40:09.770368  315151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:40:09.785151  315151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:40:09.798766  315151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:40:09.928614  315151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:40:10.059073  315151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:40:10.074783  315151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:40:10.091313  315151 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:40:10.091373  315151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:40:10.100866  315151 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:40:10.100926  315151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:40:10.112172  315151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:40:10.122021  315151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:40:10.131756  315151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:40:10.140968  315151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:40:10.151283  315151 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:40:10.160419  315151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:40:10.169729  315151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:40:10.177843  315151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:40:10.185395  315151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:40:10.316030  315151 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:40:16.236334  315151 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.920279141s)
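Condensing the run of sed commands above: before restarting crio, minikube points crictl at crio's socket via /etc/crictl.yaml and rewrites /etc/crio/crio.conf.d/02-crio.conf so the pause image, cgroup manager, and unprivileged-port sysctl match the cluster config. The net effect, as a sketch (commands lifted from the log, trimmed to the two key edits):

	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio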
	I1018 09:40:16.236350  315151 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:40:16.236401  315151 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:40:16.240178  315151 start.go:563] Will wait 60s for crictl version
	I1018 09:40:16.240240  315151 ssh_runner.go:195] Run: which crictl
	I1018 09:40:16.243814  315151 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:40:16.270663  315151 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:40:16.270738  315151 ssh_runner.go:195] Run: crio --version
	I1018 09:40:16.299722  315151 ssh_runner.go:195] Run: crio --version
	I1018 09:40:16.352867  315151 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:40:16.356151  315151 cli_runner.go:164] Run: docker network inspect functional-679784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:40:16.380827  315151 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 09:40:16.389809  315151 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1018 09:40:16.392932  315151 kubeadm.go:883] updating cluster {Name:functional-679784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-679784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:40:16.393067  315151 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:40:16.393135  315151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:40:16.487142  315151 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:40:16.487160  315151 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:40:16.487217  315151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:40:16.576126  315151 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:40:16.576137  315151 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:40:16.576144  315151 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1018 09:40:16.576253  315151 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-679784 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-679784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:40:16.576336  315151 ssh_runner.go:195] Run: crio config
	I1018 09:40:16.704040  315151 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1018 09:40:16.704064  315151 cni.go:84] Creating CNI manager for ""
	I1018 09:40:16.704083  315151 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:40:16.704099  315151 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:40:16.704120  315151 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-679784 NodeName:functional-679784 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:40:16.704265  315151 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-679784"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
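This generated config is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below (2064 bytes, matching the scp line), and on the restart path it is compared against the copy from the previous start before deciding whether to reconfigure. The check is the plain diff that appears later in this log:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new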
	
	I1018 09:40:16.704348  315151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:40:16.719602  315151 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:40:16.719669  315151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:40:16.739804  315151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 09:40:16.760918  315151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:40:16.779917  315151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1018 09:40:16.799216  315151 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:40:16.803680  315151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:40:17.032913  315151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:40:17.048635  315151 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784 for IP: 192.168.49.2
	I1018 09:40:17.048647  315151 certs.go:195] generating shared ca certs ...
	I1018 09:40:17.048661  315151 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:40:17.048845  315151 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 09:40:17.048887  315151 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 09:40:17.048893  315151 certs.go:257] generating profile certs ...
	I1018 09:40:17.049030  315151 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.key
	I1018 09:40:17.049086  315151 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/apiserver.key.a54fbadf
	I1018 09:40:17.049132  315151 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/proxy-client.key
	I1018 09:40:17.049284  315151 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 09:40:17.049314  315151 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 09:40:17.049321  315151 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:40:17.049352  315151 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:40:17.049375  315151 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:40:17.049396  315151 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 09:40:17.049462  315151 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 09:40:17.050163  315151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:40:17.078442  315151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:40:17.111117  315151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:40:17.140660  315151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 09:40:17.174434  315151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:40:17.207399  315151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:40:17.234428  315151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:40:17.260492  315151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:40:17.288081  315151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:40:17.316043  315151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 09:40:17.346223  315151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 09:40:17.374330  315151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:40:17.396099  315151 ssh_runner.go:195] Run: openssl version
	I1018 09:40:17.410183  315151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:40:17.422652  315151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:40:17.426958  315151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:40:17.427024  315151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:40:17.477488  315151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:40:17.486142  315151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 09:40:17.499241  315151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 09:40:17.503426  315151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 09:40:17.503482  315151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 09:40:17.568364  315151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 09:40:17.582589  315151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 09:40:17.596148  315151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 09:40:17.603617  315151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 09:40:17.603673  315151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 09:40:17.669921  315151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
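The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: x509 -hash prints the certificate's subject hash (evidently b5213941 for minikubeCA, 51391683 and 3ec20f2e for the test certs), and a <hash>.0 symlink in /etc/ssl/certs lets TLS clients find the CA by that hash. One cycle in isolation, with paths from the log:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # lookup symlink named <hash>.0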
	I1018 09:40:17.677949  315151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:40:17.683948  315151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:40:17.738282  315151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:40:17.790061  315151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:40:17.842840  315151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:40:17.890933  315151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:40:17.936331  315151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
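The -checkend 86400 runs above are expiry probes: openssl exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, so a quiet sequence here means none of the control-plane certs need regeneration. Standalone form:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo 'valid for at least 24h'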
	I1018 09:40:17.980370  315151 kubeadm.go:400] StartCluster: {Name:functional-679784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-679784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:40:17.980447  315151 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:40:17.980520  315151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:40:18.012249  315151 cri.go:89] found id: "36417b8a88584c6095705f113f34e033a95805adc55e4a6e74366417a77d1aac"
	I1018 09:40:18.012261  315151 cri.go:89] found id: "dcfa0001b0cadb86c2e79b84c335a5c92c2d7b886570a069f1e5198d03021f3e"
	I1018 09:40:18.012264  315151 cri.go:89] found id: "65f1b47baccba3335a979907c932621d8fa52381f95e598287aaf2fa0d46fc9e"
	I1018 09:40:18.012266  315151 cri.go:89] found id: "c1c64c61ee1ba669c4676cbb51383b0e6ee6197c0335f6bc4f32db466e0be2d1"
	I1018 09:40:18.012268  315151 cri.go:89] found id: "035b678db6aba9a8001a402b09bdfdb5f3e16d6d4686ab11726ccaf002addf46"
	I1018 09:40:18.012272  315151 cri.go:89] found id: "23a62ae1afbda178c3417a2f8797b5cb8983fe2f5de533f336585a2ba7c779c2"
	I1018 09:40:18.012274  315151 cri.go:89] found id: "19322c3978c6cb03fdfd21e075745d03c9293c8d0060edf723c562504cb18742"
	I1018 09:40:18.012277  315151 cri.go:89] found id: "e17b745e8cbcee00bdd8e66b5d7e60005a026d4499b6419aa28efc24ef1facb8"
	I1018 09:40:18.012279  315151 cri.go:89] found id: "ac3ad01759abd75176ca74a567f0f1b787829571bd94bc6907c67ca0758241a7"
	I1018 09:40:18.012286  315151 cri.go:89] found id: "85fb24aa0b56bcfeaa9134a7abf8aa713052925834fbadeb7dbc152b3f4e0918"
	I1018 09:40:18.012288  315151 cri.go:89] found id: "2aa4bbf41d7e7f531d7b76f617d5fb03264597556b0acc6596fde10e45b96d0b"
	I1018 09:40:18.012290  315151 cri.go:89] found id: "eba573c6bed29c89cf455f52951136b228c503f45639b6e5d734907ae00f89d3"
	I1018 09:40:18.012292  315151 cri.go:89] found id: "ab4de7237e1b1879a7ac20a0e22dd89137c7b04afed184a1b940e6936938bbdc"
	I1018 09:40:18.012296  315151 cri.go:89] found id: "566e939fea3b47c4290a2c40acfc5860669e9e631a95b3d388dfa81ce2f822a2"
	I1018 09:40:18.012298  315151 cri.go:89] found id: ""
	I1018 09:40:18.012346  315151 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:40:18.023225  315151 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:40:18Z" level=error msg="open /run/runc: no such file or directory"
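The `sudo runc list -f json` probe is how minikube looks for paused containers before deciding whether an unpause is needed; "open /run/runc: no such file or directory" only means runc has no state directory on this node (CRI-O keeps its runtime state elsewhere), so the warning is benign and the restart flow continues. A sketch of interpreting that probe, assuming runc's documented JSON output fields (`id`, `status`):

-- go sketch --
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer mirrors the subset of `runc list -f json` output we need.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// pausedContainers returns IDs of paused runc containers, treating a
// missing /run/runc state dir (the error in the log above) as "none".
func pausedContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil // runc has never run a container here
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, out)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() { fmt.Println(pausedContainers()) }
-- /go sketch --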
	I1018 09:40:18.023299  315151 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:40:18.031245  315151 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:40:18.031254  315151 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:40:18.031301  315151 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:40:18.038844  315151 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:40:18.039377  315151 kubeconfig.go:125] found "functional-679784" server: "https://192.168.49.2:8441"
	I1018 09:40:18.040650  315151 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:40:18.050538  315151 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-18 09:38:17.974615797 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-18 09:40:16.790576459 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
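Drift detection here is nothing more than `diff -u` between the deployed kubeadm.yaml and the freshly rendered one: diff exits 0 when the files match and 1 when they differ (as above, where only the admission-plugins value changed), and any non-empty diff routes minikube into the reconfigure-and-restart path. A sketch, assuming those standard diff exit codes:

-- go sketch --
package main

import (
	"fmt"
	"os/exec"
)

// kubeadmConfigDrift returns the unified diff between the deployed and
// freshly rendered kubeadm configs, or "" when they match.
// diff exits 0 on identical files, 1 on differences, >1 on trouble.
func kubeadmConfigDrift(current, rendered string) (string, error) {
	out, err := exec.Command("sudo", "diff", "-u", current, rendered).Output()
	if err == nil {
		return "", nil // identical: no reconfigure needed
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return string(out), nil // drift detected; caller restarts control plane
	}
	return "", err
}

func main() {
	d, err := kubeadmConfigDrift("/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(d, err)
}
-- /go sketch --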
	I1018 09:40:18.050547  315151 kubeadm.go:1160] stopping kube-system containers ...
	I1018 09:40:18.050560  315151 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1018 09:40:18.050621  315151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:40:18.088162  315151 cri.go:89] found id: "36417b8a88584c6095705f113f34e033a95805adc55e4a6e74366417a77d1aac"
	I1018 09:40:18.088173  315151 cri.go:89] found id: "dcfa0001b0cadb86c2e79b84c335a5c92c2d7b886570a069f1e5198d03021f3e"
	I1018 09:40:18.088177  315151 cri.go:89] found id: "65f1b47baccba3335a979907c932621d8fa52381f95e598287aaf2fa0d46fc9e"
	I1018 09:40:18.088179  315151 cri.go:89] found id: "c1c64c61ee1ba669c4676cbb51383b0e6ee6197c0335f6bc4f32db466e0be2d1"
	I1018 09:40:18.088182  315151 cri.go:89] found id: "035b678db6aba9a8001a402b09bdfdb5f3e16d6d4686ab11726ccaf002addf46"
	I1018 09:40:18.088190  315151 cri.go:89] found id: "23a62ae1afbda178c3417a2f8797b5cb8983fe2f5de533f336585a2ba7c779c2"
	I1018 09:40:18.088193  315151 cri.go:89] found id: "19322c3978c6cb03fdfd21e075745d03c9293c8d0060edf723c562504cb18742"
	I1018 09:40:18.088195  315151 cri.go:89] found id: "e17b745e8cbcee00bdd8e66b5d7e60005a026d4499b6419aa28efc24ef1facb8"
	I1018 09:40:18.088207  315151 cri.go:89] found id: "ac3ad01759abd75176ca74a567f0f1b787829571bd94bc6907c67ca0758241a7"
	I1018 09:40:18.088213  315151 cri.go:89] found id: "85fb24aa0b56bcfeaa9134a7abf8aa713052925834fbadeb7dbc152b3f4e0918"
	I1018 09:40:18.088215  315151 cri.go:89] found id: "2aa4bbf41d7e7f531d7b76f617d5fb03264597556b0acc6596fde10e45b96d0b"
	I1018 09:40:18.088217  315151 cri.go:89] found id: "eba573c6bed29c89cf455f52951136b228c503f45639b6e5d734907ae00f89d3"
	I1018 09:40:18.088219  315151 cri.go:89] found id: "ab4de7237e1b1879a7ac20a0e22dd89137c7b04afed184a1b940e6936938bbdc"
	I1018 09:40:18.088221  315151 cri.go:89] found id: "566e939fea3b47c4290a2c40acfc5860669e9e631a95b3d388dfa81ce2f822a2"
	I1018 09:40:18.088223  315151 cri.go:89] found id: ""
	I1018 09:40:18.088228  315151 cri.go:252] Stopping containers: [36417b8a88584c6095705f113f34e033a95805adc55e4a6e74366417a77d1aac dcfa0001b0cadb86c2e79b84c335a5c92c2d7b886570a069f1e5198d03021f3e 65f1b47baccba3335a979907c932621d8fa52381f95e598287aaf2fa0d46fc9e c1c64c61ee1ba669c4676cbb51383b0e6ee6197c0335f6bc4f32db466e0be2d1 035b678db6aba9a8001a402b09bdfdb5f3e16d6d4686ab11726ccaf002addf46 23a62ae1afbda178c3417a2f8797b5cb8983fe2f5de533f336585a2ba7c779c2 19322c3978c6cb03fdfd21e075745d03c9293c8d0060edf723c562504cb18742 e17b745e8cbcee00bdd8e66b5d7e60005a026d4499b6419aa28efc24ef1facb8 ac3ad01759abd75176ca74a567f0f1b787829571bd94bc6907c67ca0758241a7 85fb24aa0b56bcfeaa9134a7abf8aa713052925834fbadeb7dbc152b3f4e0918 2aa4bbf41d7e7f531d7b76f617d5fb03264597556b0acc6596fde10e45b96d0b eba573c6bed29c89cf455f52951136b228c503f45639b6e5d734907ae00f89d3 ab4de7237e1b1879a7ac20a0e22dd89137c7b04afed184a1b940e6936938bbdc 566e939fea3b47c4290a2c40acfc5860669e9e631a95b3d388dfa81ce2f822a2]
	I1018 09:40:18.088287  315151 ssh_runner.go:195] Run: which crictl
	I1018 09:40:18.092396  315151 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 36417b8a88584c6095705f113f34e033a95805adc55e4a6e74366417a77d1aac dcfa0001b0cadb86c2e79b84c335a5c92c2d7b886570a069f1e5198d03021f3e 65f1b47baccba3335a979907c932621d8fa52381f95e598287aaf2fa0d46fc9e c1c64c61ee1ba669c4676cbb51383b0e6ee6197c0335f6bc4f32db466e0be2d1 035b678db6aba9a8001a402b09bdfdb5f3e16d6d4686ab11726ccaf002addf46 23a62ae1afbda178c3417a2f8797b5cb8983fe2f5de533f336585a2ba7c779c2 19322c3978c6cb03fdfd21e075745d03c9293c8d0060edf723c562504cb18742 e17b745e8cbcee00bdd8e66b5d7e60005a026d4499b6419aa28efc24ef1facb8 ac3ad01759abd75176ca74a567f0f1b787829571bd94bc6907c67ca0758241a7 85fb24aa0b56bcfeaa9134a7abf8aa713052925834fbadeb7dbc152b3f4e0918 2aa4bbf41d7e7f531d7b76f617d5fb03264597556b0acc6596fde10e45b96d0b eba573c6bed29c89cf455f52951136b228c503f45639b6e5d734907ae00f89d3 ab4de7237e1b1879a7ac20a0e22dd89137c7b04afed184a1b940e6936938bbdc 566e939fea3b47c4290a2c40acfc5860669e9e631a95b3d388dfa81ce2f822a2
	I1018 09:40:29.755518  315151 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 36417b8a88584c6095705f113f34e033a95805adc55e4a6e74366417a77d1aac dcfa0001b0cadb86c2e79b84c335a5c92c2d7b886570a069f1e5198d03021f3e 65f1b47baccba3335a979907c932621d8fa52381f95e598287aaf2fa0d46fc9e c1c64c61ee1ba669c4676cbb51383b0e6ee6197c0335f6bc4f32db466e0be2d1 035b678db6aba9a8001a402b09bdfdb5f3e16d6d4686ab11726ccaf002addf46 23a62ae1afbda178c3417a2f8797b5cb8983fe2f5de533f336585a2ba7c779c2 19322c3978c6cb03fdfd21e075745d03c9293c8d0060edf723c562504cb18742 e17b745e8cbcee00bdd8e66b5d7e60005a026d4499b6419aa28efc24ef1facb8 ac3ad01759abd75176ca74a567f0f1b787829571bd94bc6907c67ca0758241a7 85fb24aa0b56bcfeaa9134a7abf8aa713052925834fbadeb7dbc152b3f4e0918 2aa4bbf41d7e7f531d7b76f617d5fb03264597556b0acc6596fde10e45b96d0b eba573c6bed29c89cf455f52951136b228c503f45639b6e5d734907ae00f89d3 ab4de7237e1b1879a7ac20a0e22dd89137c7b04afed184a1b940e6936938bbdc 566e939fea3b47c4290a2c40acfc5860669e9e631a95b3d388dfa81ce2f822a2: (11.663082965s)
	I1018 09:40:29.755585  315151 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1018 09:40:29.872467  315151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:40:29.880513  315151 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct 18 09:38 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct 18 09:38 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 18 09:38 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct 18 09:38 /etc/kubernetes/scheduler.conf
	
	I1018 09:40:29.880571  315151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1018 09:40:29.888776  315151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1018 09:40:29.896760  315151 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:40:29.896815  315151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:40:29.904622  315151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1018 09:40:29.912628  315151 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:40:29.912694  315151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:40:29.920220  315151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1018 09:40:29.927753  315151 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:40:29.927810  315151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:40:29.935288  315151 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:40:29.943275  315151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:40:29.988907  315151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:40:33.137113  315151 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.148181084s)
	I1018 09:40:33.137172  315151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:40:33.363049  315151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:40:33.436975  315151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:40:33.501564  315151 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:40:33.501632  315151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:40:34.002383  315151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:40:34.501964  315151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:40:34.521609  315151 api_server.go:72] duration metric: took 1.020054644s to wait for apiserver process to appear ...
	I1018 09:40:34.521623  315151 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:40:34.521640  315151 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 09:40:38.042176  315151 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:40:38.042192  315151 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:40:38.042205  315151 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 09:40:38.161270  315151 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:40:38.161284  315151 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:40:38.522717  315151 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 09:40:38.531706  315151 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:40:38.531729  315151 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:40:39.022357  315151 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 09:40:39.046088  315151 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:40:39.046109  315151 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:40:39.522486  315151 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 09:40:39.533745  315151 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1018 09:40:39.547844  315151 api_server.go:141] control plane version: v1.34.1
	I1018 09:40:39.547863  315151 api_server.go:131] duration metric: took 5.026233386s to wait for apiserver health ...
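The healthz progression above is typical of an apiserver restart: first 403 (the unauthenticated probe is rejected while RBAC for system:anonymous does not exist yet), then 500 while poststarthooks such as rbac/bootstrap-roles are still running, and finally 200. A sketch of the polling loop, assuming the in-cluster self-signed serving certificate (hence InsecureSkipVerify):

-- go sketch --
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200, mirroring the 403 -> 500 -> 200 progression in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is self-signed inside the cluster.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy")
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.49.2:8441/healthz", time.Minute))
}
-- /go sketch --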
	I1018 09:40:39.547871  315151 cni.go:84] Creating CNI manager for ""
	I1018 09:40:39.547877  315151 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:40:39.552024  315151 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 09:40:39.555064  315151 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:40:39.559498  315151 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:40:39.559508  315151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:40:39.573918  315151 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:40:40.080395  315151 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:40:40.090951  315151 system_pods.go:59] 8 kube-system pods found
	I1018 09:40:40.090972  315151 system_pods.go:61] "coredns-66bc5c9577-g9mpj" [2d817e25-465e-47b3-9f35-4ed4ecd20284] Running
	I1018 09:40:40.090981  315151 system_pods.go:61] "etcd-functional-679784" [105d4f1c-9c26-47e4-b465-ed4f3f1dbe97] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:40:40.090985  315151 system_pods.go:61] "kindnet-sw9kp" [9ae93f06-c6ca-4a90-b69b-0989025dded0] Running
	I1018 09:40:40.090992  315151 system_pods.go:61] "kube-apiserver-functional-679784" [e7996211-7905-4cc9-af43-f1ae99e2f7d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:40:40.090998  315151 system_pods.go:61] "kube-controller-manager-functional-679784" [9202f362-62f0-4404-b8f9-a882bce9026c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:40:40.091003  315151 system_pods.go:61] "kube-proxy-j4hbt" [ecc15a6c-b49d-4aed-993c-9be14aef1164] Running
	I1018 09:40:40.091009  315151 system_pods.go:61] "kube-scheduler-functional-679784" [5d064c97-a626-4d8c-9229-f05a9255bb9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:40:40.091012  315151 system_pods.go:61] "storage-provisioner" [e66fbf6a-302e-4c8e-8978-fa38d6b51354] Running
	I1018 09:40:40.091020  315151 system_pods.go:74] duration metric: took 10.611597ms to wait for pod list to return data ...
	I1018 09:40:40.091027  315151 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:40:40.095191  315151 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:40:40.095233  315151 node_conditions.go:123] node cpu capacity is 2
	I1018 09:40:40.095246  315151 node_conditions.go:105] duration metric: took 4.214977ms to run NodePressure ...
	I1018 09:40:40.095376  315151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:40:40.352506  315151 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1018 09:40:40.360172  315151 kubeadm.go:743] kubelet initialised
	I1018 09:40:40.360184  315151 kubeadm.go:744] duration metric: took 7.66447ms waiting for restarted kubelet to initialise ...
	I1018 09:40:40.360200  315151 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:40:40.369996  315151 ops.go:34] apiserver oom_adj: -16
	I1018 09:40:40.370008  315151 kubeadm.go:601] duration metric: took 22.33874892s to restartPrimaryControlPlane
	I1018 09:40:40.370015  315151 kubeadm.go:402] duration metric: took 22.389656522s to StartCluster
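The oom_adj read a few lines up is a liveness-plus-protection check: /proc/<pid>/oom_adj is the legacy OOM-killer knob (range -17..15), and -16 means the kernel will almost never pick the apiserver under memory pressure. A sketch of the same read, assuming a single apiserver process (`pgrep -n` picks the newest):

-- go sketch --
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj reads the legacy /proc/<pid>/oom_adj value for the
// kube-apiserver, as in the log above. The legacy range is -17..15;
// -16 makes the process a near-last choice for the OOM killer.
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver not running: %w", err)
	}
	path := fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid)))
	val, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(val)), nil
}

func main() {
	fmt.Println(apiserverOOMAdj())
}
-- /go sketch --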
	I1018 09:40:40.370029  315151 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:40:40.370089  315151 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 09:40:40.370704  315151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:40:40.370924  315151 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:40:40.371236  315151 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:40:40.371292  315151 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:40:40.371413  315151 addons.go:69] Setting storage-provisioner=true in profile "functional-679784"
	I1018 09:40:40.371422  315151 addons.go:69] Setting default-storageclass=true in profile "functional-679784"
	I1018 09:40:40.371427  315151 addons.go:238] Setting addon storage-provisioner=true in "functional-679784"
	W1018 09:40:40.371432  315151 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:40:40.371436  315151 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-679784"
	I1018 09:40:40.371454  315151 host.go:66] Checking if "functional-679784" exists ...
	I1018 09:40:40.371757  315151 cli_runner.go:164] Run: docker container inspect functional-679784 --format={{.State.Status}}
	I1018 09:40:40.372124  315151 cli_runner.go:164] Run: docker container inspect functional-679784 --format={{.State.Status}}
	I1018 09:40:40.374173  315151 out.go:179] * Verifying Kubernetes components...
	I1018 09:40:40.377178  315151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:40:40.399239  315151 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:40:40.402473  315151 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:40:40.402488  315151 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:40:40.402554  315151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679784
	I1018 09:40:40.422638  315151 addons.go:238] Setting addon default-storageclass=true in "functional-679784"
	W1018 09:40:40.422649  315151 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:40:40.422681  315151 host.go:66] Checking if "functional-679784" exists ...
	I1018 09:40:40.428762  315151 cli_runner.go:164] Run: docker container inspect functional-679784 --format={{.State.Status}}
	I1018 09:40:40.431861  315151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/functional-679784/id_rsa Username:docker}
	I1018 09:40:40.459020  315151 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:40:40.459034  315151 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:40:40.459099  315151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679784
	I1018 09:40:40.493305  315151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/functional-679784/id_rsa Username:docker}
	I1018 09:40:40.561437  315151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:40:40.645397  315151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:40:40.682328  315151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:40:41.369840  315151 node_ready.go:35] waiting up to 6m0s for node "functional-679784" to be "Ready" ...
	I1018 09:40:41.373250  315151 node_ready.go:49] node "functional-679784" is "Ready"
	I1018 09:40:41.373267  315151 node_ready.go:38] duration metric: took 3.409091ms for node "functional-679784" to be "Ready" ...
	I1018 09:40:41.373278  315151 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:40:41.373336  315151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:40:41.381422  315151 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:40:41.384382  315151 addons.go:514] duration metric: took 1.013074613s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:40:41.387606  315151 api_server.go:72] duration metric: took 1.016656953s to wait for apiserver process to appear ...
	I1018 09:40:41.387619  315151 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:40:41.387635  315151 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 09:40:41.399303  315151 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1018 09:40:41.400349  315151 api_server.go:141] control plane version: v1.34.1
	I1018 09:40:41.400363  315151 api_server.go:131] duration metric: took 12.73884ms to wait for apiserver health ...
	I1018 09:40:41.400371  315151 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:40:41.403406  315151 system_pods.go:59] 8 kube-system pods found
	I1018 09:40:41.403420  315151 system_pods.go:61] "coredns-66bc5c9577-g9mpj" [2d817e25-465e-47b3-9f35-4ed4ecd20284] Running
	I1018 09:40:41.403429  315151 system_pods.go:61] "etcd-functional-679784" [105d4f1c-9c26-47e4-b465-ed4f3f1dbe97] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:40:41.403437  315151 system_pods.go:61] "kindnet-sw9kp" [9ae93f06-c6ca-4a90-b69b-0989025dded0] Running
	I1018 09:40:41.403444  315151 system_pods.go:61] "kube-apiserver-functional-679784" [e7996211-7905-4cc9-af43-f1ae99e2f7d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:40:41.403449  315151 system_pods.go:61] "kube-controller-manager-functional-679784" [9202f362-62f0-4404-b8f9-a882bce9026c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:40:41.403453  315151 system_pods.go:61] "kube-proxy-j4hbt" [ecc15a6c-b49d-4aed-993c-9be14aef1164] Running
	I1018 09:40:41.403459  315151 system_pods.go:61] "kube-scheduler-functional-679784" [5d064c97-a626-4d8c-9229-f05a9255bb9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:40:41.403462  315151 system_pods.go:61] "storage-provisioner" [e66fbf6a-302e-4c8e-8978-fa38d6b51354] Running
	I1018 09:40:41.403468  315151 system_pods.go:74] duration metric: took 3.091148ms to wait for pod list to return data ...
	I1018 09:40:41.403475  315151 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:40:41.406407  315151 default_sa.go:45] found service account: "default"
	I1018 09:40:41.406431  315151 default_sa.go:55] duration metric: took 2.951188ms for default service account to be created ...
	I1018 09:40:41.406440  315151 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:40:41.409438  315151 system_pods.go:86] 8 kube-system pods found
	I1018 09:40:41.409454  315151 system_pods.go:89] "coredns-66bc5c9577-g9mpj" [2d817e25-465e-47b3-9f35-4ed4ecd20284] Running
	I1018 09:40:41.409462  315151 system_pods.go:89] "etcd-functional-679784" [105d4f1c-9c26-47e4-b465-ed4f3f1dbe97] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:40:41.409466  315151 system_pods.go:89] "kindnet-sw9kp" [9ae93f06-c6ca-4a90-b69b-0989025dded0] Running
	I1018 09:40:41.409471  315151 system_pods.go:89] "kube-apiserver-functional-679784" [e7996211-7905-4cc9-af43-f1ae99e2f7d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:40:41.409486  315151 system_pods.go:89] "kube-controller-manager-functional-679784" [9202f362-62f0-4404-b8f9-a882bce9026c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:40:41.409489  315151 system_pods.go:89] "kube-proxy-j4hbt" [ecc15a6c-b49d-4aed-993c-9be14aef1164] Running
	I1018 09:40:41.409494  315151 system_pods.go:89] "kube-scheduler-functional-679784" [5d064c97-a626-4d8c-9229-f05a9255bb9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:40:41.409497  315151 system_pods.go:89] "storage-provisioner" [e66fbf6a-302e-4c8e-8978-fa38d6b51354] Running
	I1018 09:40:41.409503  315151 system_pods.go:126] duration metric: took 3.058638ms to wait for k8s-apps to be running ...
	I1018 09:40:41.409510  315151 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:40:41.409568  315151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:40:41.424364  315151 system_svc.go:56] duration metric: took 14.844338ms WaitForService to wait for kubelet
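The WaitForService step above boils down to `systemctl is-active --quiet kubelet` (the logged command carries a stray "service" token, but the kubelet unit is what decides the outcome): is-active exits 0 when the unit is active, so the exit code alone answers the question. A sketch using just the unit name:

-- go sketch --
package main

import (
	"fmt"
	"os/exec"
)

// kubeletRunning mirrors the `systemctl is-active --quiet kubelet` probe:
// with --quiet nothing is printed and the exit status (0 = active) is the
// entire result.
func kubeletRunning() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletRunning())
}
-- /go sketch --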
	I1018 09:40:41.424382  315151 kubeadm.go:586] duration metric: took 1.053437963s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:40:41.424399  315151 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:40:41.427108  315151 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:40:41.427123  315151 node_conditions.go:123] node cpu capacity is 2
	I1018 09:40:41.427133  315151 node_conditions.go:105] duration metric: took 2.730348ms to run NodePressure ...
	I1018 09:40:41.427145  315151 start.go:241] waiting for startup goroutines ...
	I1018 09:40:41.427151  315151 start.go:246] waiting for cluster config update ...
	I1018 09:40:41.427161  315151 start.go:255] writing updated cluster config ...
	I1018 09:40:41.427488  315151 ssh_runner.go:195] Run: rm -f paused
	I1018 09:40:41.431656  315151 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:40:41.435422  315151 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-g9mpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:40:41.440814  315151 pod_ready.go:94] pod "coredns-66bc5c9577-g9mpj" is "Ready"
	I1018 09:40:41.440829  315151 pod_ready.go:86] duration metric: took 5.3915ms for pod "coredns-66bc5c9577-g9mpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:40:41.443691  315151 pod_ready.go:83] waiting for pod "etcd-functional-679784" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:40:43.450221  315151 pod_ready.go:104] pod "etcd-functional-679784" is not "Ready", error: <nil>
	W1018 09:40:45.450870  315151 pod_ready.go:104] pod "etcd-functional-679784" is not "Ready", error: <nil>
	W1018 09:40:47.950546  315151 pod_ready.go:104] pod "etcd-functional-679784" is not "Ready", error: <nil>
	W1018 09:40:50.449810  315151 pod_ready.go:104] pod "etcd-functional-679784" is not "Ready", error: <nil>
	W1018 09:40:52.948532  315151 pod_ready.go:104] pod "etcd-functional-679784" is not "Ready", error: <nil>
	I1018 09:40:53.448994  315151 pod_ready.go:94] pod "etcd-functional-679784" is "Ready"
	I1018 09:40:53.449013  315151 pod_ready.go:86] duration metric: took 12.005303362s for pod "etcd-functional-679784" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:40:53.451738  315151 pod_ready.go:83] waiting for pod "kube-apiserver-functional-679784" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:40:53.956913  315151 pod_ready.go:94] pod "kube-apiserver-functional-679784" is "Ready"
	I1018 09:40:53.956928  315151 pod_ready.go:86] duration metric: took 505.177418ms for pod "kube-apiserver-functional-679784" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:40:53.959392  315151 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-679784" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:40:53.964229  315151 pod_ready.go:94] pod "kube-controller-manager-functional-679784" is "Ready"
	I1018 09:40:53.964242  315151 pod_ready.go:86] duration metric: took 4.837209ms for pod "kube-controller-manager-functional-679784" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:40:53.966601  315151 pod_ready.go:83] waiting for pod "kube-proxy-j4hbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:40:54.051930  315151 pod_ready.go:94] pod "kube-proxy-j4hbt" is "Ready"
	I1018 09:40:54.051946  315151 pod_ready.go:86] duration metric: took 85.33324ms for pod "kube-proxy-j4hbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:40:54.246898  315151 pod_ready.go:83] waiting for pod "kube-scheduler-functional-679784" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:40:54.647242  315151 pod_ready.go:94] pod "kube-scheduler-functional-679784" is "Ready"
	I1018 09:40:54.647256  315151 pod_ready.go:86] duration metric: took 400.328839ms for pod "kube-scheduler-functional-679784" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:40:54.647266  315151 pod_ready.go:40] duration metric: took 13.215586716s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:40:54.701266  315151 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:40:54.704385  315151 out.go:179] * Done! kubectl is now configured to use "functional-679784" cluster and "default" namespace by default
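The closing note flags a client/server minor-version skew of 1 (kubectl 1.33 against a 1.34 control plane), which is inside kubectl's supported window of one minor version in either direction, so it is informational only. A sketch of the skew computation (hypothetical helper, not minikube's code):

-- go sketch --
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew parses "major.minor[.patch]" versions and returns the absolute
// difference of the minor components, the quantity reported as "minor skew"
// in the log line above.
func minorSkew(client, server string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(server)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.33.2", "1.34.1")
	fmt.Println("minor skew:", skew) // 1, within kubectl's supported range
}
-- /go sketch --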
	
	
	==> CRI-O <==
	Oct 18 09:41:31 functional-679784 crio[3549]: time="2025-10-18T09:41:31.595132285Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-btn7p Namespace:default ID:416a437c2e54d80b61cda4f63c2f1b5fbaa1604a1393d455f93b22845c29ab5f UID:eb1a74b4-6ba1-4e66-8da1-fc67da81e371 NetNS:/var/run/netns/fc435d6a-ac95-4d41-b9d5-0d30e6baae97 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40006e8e78}] Aliases:map[]}"
	Oct 18 09:41:31 functional-679784 crio[3549]: time="2025-10-18T09:41:31.595283322Z" level=info msg="Checking pod default_hello-node-75c85bcc94-btn7p for CNI network kindnet (type=ptp)"
	Oct 18 09:41:31 functional-679784 crio[3549]: time="2025-10-18T09:41:31.598735032Z" level=info msg="Ran pod sandbox 416a437c2e54d80b61cda4f63c2f1b5fbaa1604a1393d455f93b22845c29ab5f with infra container: default/hello-node-75c85bcc94-btn7p/POD" id=e8fcfefa-bf30-4927-8d6d-e18cbfced5a9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:41:31 functional-679784 crio[3549]: time="2025-10-18T09:41:31.6009446Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c0dc8438-468e-4236-a8e9-46cb086f7f3b name=/runtime.v1.ImageService/PullImage
	Oct 18 09:41:33 functional-679784 crio[3549]: time="2025-10-18T09:41:33.552172349Z" level=info msg="Stopping pod sandbox: fe6bef3045c690773478117735cccd66f1cb4b381eb0684d4a75e3073aa65cec" id=47aef860-be66-4b70-a5c1-120ebc28f01d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 09:41:33 functional-679784 crio[3549]: time="2025-10-18T09:41:33.552240208Z" level=info msg="Stopped pod sandbox (already stopped): fe6bef3045c690773478117735cccd66f1cb4b381eb0684d4a75e3073aa65cec" id=47aef860-be66-4b70-a5c1-120ebc28f01d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 09:41:33 functional-679784 crio[3549]: time="2025-10-18T09:41:33.553059387Z" level=info msg="Removing pod sandbox: fe6bef3045c690773478117735cccd66f1cb4b381eb0684d4a75e3073aa65cec" id=29a10947-9303-4a14-980f-c9fd2d67df08 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 09:41:33 functional-679784 crio[3549]: time="2025-10-18T09:41:33.556656283Z" level=info msg="Removed pod sandbox: fe6bef3045c690773478117735cccd66f1cb4b381eb0684d4a75e3073aa65cec" id=29a10947-9303-4a14-980f-c9fd2d67df08 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 09:41:33 functional-679784 crio[3549]: time="2025-10-18T09:41:33.55745791Z" level=info msg="Stopping pod sandbox: f5593b797f7164db656cc40feec4ed90e669bd54099e9eeffa52c5cdb20bea53" id=c932034d-a9b6-49d9-aae9-5bdad031203b name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 09:41:33 functional-679784 crio[3549]: time="2025-10-18T09:41:33.557508365Z" level=info msg="Stopped pod sandbox (already stopped): f5593b797f7164db656cc40feec4ed90e669bd54099e9eeffa52c5cdb20bea53" id=c932034d-a9b6-49d9-aae9-5bdad031203b name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 09:41:33 functional-679784 crio[3549]: time="2025-10-18T09:41:33.557848586Z" level=info msg="Removing pod sandbox: f5593b797f7164db656cc40feec4ed90e669bd54099e9eeffa52c5cdb20bea53" id=a8dc24e1-bd3d-4306-8c75-bdca1cebb8e5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 09:41:33 functional-679784 crio[3549]: time="2025-10-18T09:41:33.562107544Z" level=info msg="Removed pod sandbox: f5593b797f7164db656cc40feec4ed90e669bd54099e9eeffa52c5cdb20bea53" id=a8dc24e1-bd3d-4306-8c75-bdca1cebb8e5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 09:41:33 functional-679784 crio[3549]: time="2025-10-18T09:41:33.562627045Z" level=info msg="Stopping pod sandbox: ac69da7ff035fb9f4b505a059f1c9f2518891f406676a2be4b5d7664605a4181" id=b9f6b37c-c54d-48eb-a70c-552c699b3bc4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 09:41:33 functional-679784 crio[3549]: time="2025-10-18T09:41:33.562672888Z" level=info msg="Stopped pod sandbox (already stopped): ac69da7ff035fb9f4b505a059f1c9f2518891f406676a2be4b5d7664605a4181" id=b9f6b37c-c54d-48eb-a70c-552c699b3bc4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 09:41:33 functional-679784 crio[3549]: time="2025-10-18T09:41:33.562985375Z" level=info msg="Removing pod sandbox: ac69da7ff035fb9f4b505a059f1c9f2518891f406676a2be4b5d7664605a4181" id=9c9e61ec-e4be-4549-accd-880aad08cfb5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 09:41:33 functional-679784 crio[3549]: time="2025-10-18T09:41:33.566478809Z" level=info msg="Removed pod sandbox: ac69da7ff035fb9f4b505a059f1c9f2518891f406676a2be4b5d7664605a4181" id=9c9e61ec-e4be-4549-accd-880aad08cfb5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 09:41:42 functional-679784 crio[3549]: time="2025-10-18T09:41:42.508338913Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=99920239-2ba5-4026-9bbf-b040fc8a8d1d name=/runtime.v1.ImageService/PullImage
	Oct 18 09:41:53 functional-679784 crio[3549]: time="2025-10-18T09:41:53.508319144Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3440f84f-6d58-431c-adab-a7f6c1c81b66 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:42:09 functional-679784 crio[3549]: time="2025-10-18T09:42:09.508572427Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e3f1bc15-c3cc-48de-8a8a-a072296ac6f3 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:42:43 functional-679784 crio[3549]: time="2025-10-18T09:42:43.509630905Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=fd7911fd-2f64-44a6-b649-be4e0c70ad89 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:42:52 functional-679784 crio[3549]: time="2025-10-18T09:42:52.50780218Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=24a0fa4d-7147-40d8-b397-f4a27ddad9db name=/runtime.v1.ImageService/PullImage
	Oct 18 09:44:05 functional-679784 crio[3549]: time="2025-10-18T09:44:05.50888641Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=efe324c4-16a5-4448-863d-a15af2756353 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:44:16 functional-679784 crio[3549]: time="2025-10-18T09:44:16.508130521Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d0b897ba-81bc-498e-80a0-9053a61915c0 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:46:49 functional-679784 crio[3549]: time="2025-10-18T09:46:49.508052593Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f301a832-d0e3-4008-9aa8-4d26a657bcd0 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:47:00 functional-679784 crio[3549]: time="2025-10-18T09:47:00.508948244Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=06adb11c-4348-4ca0-90fa-9e3a52f45daf name=/runtime.v1.ImageService/PullImage
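The tail of the CRI-O log shows the same `kicbase/echo-server:latest` pull being retried at ever-longer intervals. That spacing is consistent with the kubelet's exponential image-pull backoff (assumed defaults: 10s base, doubling, capped at 5m), with two pods retrying independently and interleaving their schedules. A sketch of that schedule:

-- go sketch --
package main

import (
	"fmt"
	"time"
)

// pullBackoffSchedule returns kubelet-style image pull retry delays: an
// exponential backoff starting at base and doubling up to maxDelay.
// (Values assumed from kubelet defaults: 10s base, 5m cap.)
func pullBackoffSchedule(base, maxDelay time.Duration, attempts int) []time.Duration {
	var out []time.Duration
	d := base
	for i := 0; i < attempts; i++ {
		out = append(out, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return out
}

func main() {
	// Two pods retrying on schedules like this, offset from each other,
	// produce the irregular-looking gaps in the CRI-O log above.
	fmt.Println(pullBackoffSchedule(10*time.Second, 5*time.Minute, 6))
}
-- /go sketch --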
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3d35511dcbbd1       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a   9 minutes ago       Running             myfrontend                0                   68d08109ca7c2       sp-pod                                      default
	e7bcead6e5267       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   e89d31d536d4f       nginx-svc                                   default
	67b3a0aa79791       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                3                   04f71a7f9024e       kube-proxy-j4hbt                            kube-system
	5f1a217846b91       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               3                   a8c75baa0ae7c       kindnet-sw9kp                               kube-system
	bbafe73db0932       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       4                   a7e9de5be1ec7       storage-provisioner                         kube-system
	1258690a45a1d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   708296c667aa5       kube-apiserver-functional-679784            kube-system
	0969cc82b664b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      3                   f7a89e8e052bd       etcd-functional-679784                      kube-system
	daf000c78b846       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            3                   beb355d5b6ebf       kube-scheduler-functional-679784            kube-system
	49293b30a18af       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   3                   16d465e91192c       kube-controller-manager-functional-679784   kube-system
	fc026e70f7456       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Exited              storage-provisioner       3                   a7e9de5be1ec7       storage-provisioner                         kube-system
	4598b7bdd2bb4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   8e7d0e82034e4       coredns-66bc5c9577-g9mpj                    kube-system
	36417b8a88584       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Exited              kindnet-cni               2                   a8c75baa0ae7c       kindnet-sw9kp                               kube-system
	dcfa0001b0cad       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Exited              etcd                      2                   f7a89e8e052bd       etcd-functional-679784                      kube-system
	65f1b47baccba       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Exited              kube-proxy                2                   04f71a7f9024e       kube-proxy-j4hbt                            kube-system
	035b678db6aba       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Exited              kube-scheduler            2                   beb355d5b6ebf       kube-scheduler-functional-679784            kube-system
	23a62ae1afbda       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Exited              kube-controller-manager   2                   16d465e91192c       kube-controller-manager-functional-679784   kube-system
	2aa4bbf41d7e7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   8e7d0e82034e4       coredns-66bc5c9577-g9mpj                    kube-system
	
	
	==> coredns [2aa4bbf41d7e7f531d7b76f617d5fb03264597556b0acc6596fde10e45b96d0b] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37124 - 39479 "HINFO IN 406985267598103159.4938711355269882544. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.079462298s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4598b7bdd2bb4725827f46b26fa7cfabac503275fb82ef81963dc81c53196fc5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33544 - 13011 "HINFO IN 468076982876695606.4619444656886338346. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021551386s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.2:39096->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.2:39108->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.2:39106->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               functional-679784
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-679784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=functional-679784
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_38_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:38:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-679784
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:51:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:50:20 +0000   Sat, 18 Oct 2025 09:38:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:50:20 +0000   Sat, 18 Oct 2025 09:38:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:50:20 +0000   Sat, 18 Oct 2025 09:38:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:50:20 +0000   Sat, 18 Oct 2025 09:39:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-679784
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                bf2b4757-cea6-42f7-b249-1a6419e73b8d
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-btn7p                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	  default                     hello-node-connect-7d85dfc575-qskk4          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 coredns-66bc5c9577-g9mpj                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-679784                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-sw9kp                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-679784             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-679784    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-j4hbt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-679784             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-679784 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-679784 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-679784 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-679784 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-679784 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-679784 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           12m                node-controller  Node functional-679784 event: Registered Node functional-679784 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-679784 status is now: NodeReady
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-679784 event: Registered Node functional-679784 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-679784 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-679784 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-679784 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-679784 event: Registered Node functional-679784 in Controller
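	Note on the ContainerGCFailed warning above: "dial unix /var/run/crio/crio.sock" failing with "no such file or directory" is consistent with CRI-O being restarted mid-run (the functional tests restart the runtime) rather than a persistent runtime failure. A minimal spot-check, assuming the profile name used throughout this report:
	
	  out/minikube-linux-arm64 -p functional-679784 ssh -- sudo systemctl is-active crio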
	
	
	==> dmesg <==
	[Oct18 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015604] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504512] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034321] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.754127] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.006986] kauditd_printk_skb: 36 callbacks suppressed
	[Oct18 08:37] hrtimer: interrupt took 52245394 ns
	[Oct18 08:40] FS-Cache: Duplicate cookie detected
	[  +0.000820] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001041] FS-Cache: O-cookie d=0000000012c02099{9P.session} n=0000000039d56c98
	[  +0.001191] FS-Cache: O-key=[10] '34323935323339393835'
	[  +0.000847] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001040] FS-Cache: N-cookie d=0000000012c02099{9P.session} n=00000000aa671ad4
	[  +0.001145] FS-Cache: N-key=[10] '34323935323339393835'
	[Oct18 09:29] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 09:31] overlayfs: idmapped layers are currently not supported
	[  +0.081210] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct18 09:37] overlayfs: idmapped layers are currently not supported
	[Oct18 09:38] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0969cc82b664ba19f694caafb7066701fbb8b99c0fc5a5a0d94235c42ad29961] <==
	{"level":"warn","ts":"2025-10-18T09:40:36.822066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:36.833047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:36.858741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:36.874173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:36.891382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:36.914696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:36.951973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:36.979372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:37.008747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:37.016609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:37.034161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:37.054304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:37.077998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:37.094029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:37.112436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:37.174494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:37.192812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:37.216201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:37.236405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:37.259219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:37.270210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:40:37.346281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37662","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:50:35.710327Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1135}
	{"level":"info","ts":"2025-10-18T09:50:35.733952Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1135,"took":"23.263783ms","hash":2810199861,"current-db-size-bytes":3366912,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1466368,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-18T09:50:35.734024Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2810199861,"revision":1135,"compact-revision":-1}
	
	
	==> etcd [dcfa0001b0cadb86c2e79b84c335a5c92c2d7b886570a069f1e5198d03021f3e] <==
	{"level":"info","ts":"2025-10-18T09:40:16.954831Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"warn","ts":"2025-10-18T09:40:16.956078Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-10-18T09:40:16.956158Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-18T09:40:16.953549Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T09:40:16.972996Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T09:40:16.973907Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-10-18T09:40:17.067209Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-18T09:40:18.246926Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T09:40:18.246966Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-679784","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-18T09:40:18.247102Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T09:40:18.249312Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-18T09:40:18.253417Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T09:40:18.253511Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T09:40:18.253552Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T09:40:18.253651Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T09:40:18.253713Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T09:40:18.253745Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-18T09:40:18.253302Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T09:40:18.253824Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-18T09:40:18.253895Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-18T09:40:18.253947Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T09:40:18.266077Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-18T09:40:18.266238Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T09:40:18.266466Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-18T09:40:18.266518Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-679784","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:51:14 up  1:33,  0 user,  load average: 0.42, 0.35, 1.18
	Linux functional-679784 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [36417b8a88584c6095705f113f34e033a95805adc55e4a6e74366417a77d1aac] <==
	I1018 09:40:16.760933       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:40:16.761171       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1018 09:40:16.809366       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:40:16.809393       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:40:16.809409       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:40:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:40:16.976187       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:40:16.976283       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:40:16.976317       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:40:17.010022       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kindnet [5f1a217846b91f91c36dc42a1ae5b1a84d21f73139498a7d68c5a6611d342b9d] <==
	I1018 09:49:09.166006       1 main.go:301] handling current node
	I1018 09:49:19.172012       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:49:19.172134       1 main.go:301] handling current node
	I1018 09:49:29.163372       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:49:29.163407       1 main.go:301] handling current node
	I1018 09:49:39.164067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:49:39.164145       1 main.go:301] handling current node
	I1018 09:49:49.164805       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:49:49.164849       1 main.go:301] handling current node
	I1018 09:49:59.167818       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:49:59.167854       1 main.go:301] handling current node
	I1018 09:50:09.171755       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:50:09.171790       1 main.go:301] handling current node
	I1018 09:50:19.170120       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:50:19.170156       1 main.go:301] handling current node
	I1018 09:50:29.163844       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:50:29.163883       1 main.go:301] handling current node
	I1018 09:50:39.165348       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:50:39.165503       1 main.go:301] handling current node
	I1018 09:50:49.171072       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:50:49.171106       1 main.go:301] handling current node
	I1018 09:50:59.163492       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:50:59.163524       1 main.go:301] handling current node
	I1018 09:51:09.172310       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:51:09.172347       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1258690a45a1da27fd03280b88434d7944296e0aa1f228290ed87c1a5c7a8c2e] <==
	I1018 09:40:38.326950       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:40:38.335034       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:40:38.343011       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 09:40:38.343635       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 09:40:38.343698       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:40:38.345426       1 cache.go:39] Caches are synced for autoregister controller
	E1018 09:40:38.358150       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:40:38.371336       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:40:38.563648       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:40:39.015303       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:40:40.065417       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:40:40.220641       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:40:40.322514       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:40:40.330918       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:40:46.050224       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:40:46.052235       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:40:46.054432       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:40:58.022596       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.233.248"}
	I1018 09:41:04.061654       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.63.238"}
	I1018 09:41:12.716042       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.193.18"}
	E1018 09:41:22.682344       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:52902: use of closed network connection
	E1018 09:41:23.801451       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1018 09:41:31.131066       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:33722: use of closed network connection
	I1018 09:41:31.343802       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.98.95"}
	I1018 09:50:38.244578       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [23a62ae1afbda178c3417a2f8797b5cb8983fe2f5de533f336585a2ba7c779c2] <==
	I1018 09:40:18.440395       1 serving.go:386] Generated self-signed cert in-memory
	I1018 09:40:19.857365       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 09:40:19.857478       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:40:19.859449       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 09:40:19.859632       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 09:40:19.859855       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 09:40:19.859887       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [49293b30a18af628207312eae291bcdffb64e2434d0584ac7995d31e92834e06] <==
	I1018 09:40:41.604512       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:40:41.605276       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:40:41.605296       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:40:41.605304       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:40:41.609072       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:40:41.609126       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 09:40:41.611642       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:40:41.611985       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:40:41.612059       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:40:41.616063       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 09:40:41.616206       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 09:40:41.620276       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:40:41.621468       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:40:41.627610       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:40:41.630310       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:40:41.631447       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:40:41.635764       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:40:41.638896       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:40:41.639787       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:40:41.639831       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:40:41.642020       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:40:41.642080       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:40:41.642097       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:40:41.642121       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:40:41.642146       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	
	
	==> kube-proxy [65f1b47baccba3335a979907c932621d8fa52381f95e598287aaf2fa0d46fc9e] <==
	
	
	==> kube-proxy [67b3a0aa7979141d53bf1f1f82a6f7ed480a5d7dc2629e3cb702b12fc712f5bd] <==
	I1018 09:40:38.990259       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:40:39.303981       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:40:39.487167       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:40:39.487274       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 09:40:39.487402       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:40:39.516031       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:40:39.516087       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:40:39.520081       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:40:39.520374       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:40:39.520394       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:40:39.521804       1 config.go:200] "Starting service config controller"
	I1018 09:40:39.521823       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:40:39.521848       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:40:39.521854       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:40:39.521881       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:40:39.521891       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:40:39.522892       1 config.go:309] "Starting node config controller"
	I1018 09:40:39.522966       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:40:39.523013       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:40:39.622543       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:40:39.622576       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:40:39.622625       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [035b678db6aba9a8001a402b09bdfdb5f3e16d6d4686ab11726ccaf002addf46] <==
	I1018 09:40:19.252437       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:40:29.485073       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:42434->192.168.49.2:8441: read: connection reset by peer
	W1018 09:40:29.485106       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:40:29.485125       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:40:29.494487       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:40:29.494518       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1018 09:40:29.494539       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1018 09:40:29.497866       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:40:29.497964       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:40:29.498300       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1018 09:40:29.498420       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	E1018 09:40:29.498534       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:40:29.498575       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:40:29.498884       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 09:40:29.498913       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 09:40:29.498936       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 09:40:29.498955       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [daf000c78b84647d4ccde1a1954a7d39e47c78cfb3c2d592b6ae5fbb1600f8aa] <==
	I1018 09:40:37.572997       1 serving.go:386] Generated self-signed cert in-memory
	I1018 09:40:39.435005       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:40:39.435112       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:40:39.441843       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:40:39.442032       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 09:40:39.442086       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 09:40:39.442160       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:40:39.442975       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:40:39.443052       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:40:39.443578       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:40:39.446468       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:40:39.542578       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 09:40:39.543908       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:40:39.546719       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:48:30 functional-679784 kubelet[4050]: E1018 09:48:30.507736    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-btn7p" podUID="eb1a74b4-6ba1-4e66-8da1-fc67da81e371"
	Oct 18 09:48:39 functional-679784 kubelet[4050]: E1018 09:48:39.507600    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qskk4" podUID="5f50a6ef-a568-4552-80f9-41f0546d6341"
	Oct 18 09:48:41 functional-679784 kubelet[4050]: E1018 09:48:41.507255    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-btn7p" podUID="eb1a74b4-6ba1-4e66-8da1-fc67da81e371"
	Oct 18 09:48:52 functional-679784 kubelet[4050]: E1018 09:48:52.507481    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-btn7p" podUID="eb1a74b4-6ba1-4e66-8da1-fc67da81e371"
	Oct 18 09:48:53 functional-679784 kubelet[4050]: E1018 09:48:53.508422    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qskk4" podUID="5f50a6ef-a568-4552-80f9-41f0546d6341"
	Oct 18 09:49:04 functional-679784 kubelet[4050]: E1018 09:49:04.507554    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-btn7p" podUID="eb1a74b4-6ba1-4e66-8da1-fc67da81e371"
	Oct 18 09:49:08 functional-679784 kubelet[4050]: E1018 09:49:08.507573    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qskk4" podUID="5f50a6ef-a568-4552-80f9-41f0546d6341"
	Oct 18 09:49:19 functional-679784 kubelet[4050]: E1018 09:49:19.508249    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-btn7p" podUID="eb1a74b4-6ba1-4e66-8da1-fc67da81e371"
	Oct 18 09:49:22 functional-679784 kubelet[4050]: E1018 09:49:22.508029    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qskk4" podUID="5f50a6ef-a568-4552-80f9-41f0546d6341"
	Oct 18 09:49:30 functional-679784 kubelet[4050]: E1018 09:49:30.507978    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-btn7p" podUID="eb1a74b4-6ba1-4e66-8da1-fc67da81e371"
	Oct 18 09:49:36 functional-679784 kubelet[4050]: E1018 09:49:36.507311    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qskk4" podUID="5f50a6ef-a568-4552-80f9-41f0546d6341"
	Oct 18 09:49:45 functional-679784 kubelet[4050]: E1018 09:49:45.508374    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-btn7p" podUID="eb1a74b4-6ba1-4e66-8da1-fc67da81e371"
	Oct 18 09:49:50 functional-679784 kubelet[4050]: E1018 09:49:50.507619    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qskk4" podUID="5f50a6ef-a568-4552-80f9-41f0546d6341"
	Oct 18 09:49:59 functional-679784 kubelet[4050]: E1018 09:49:59.507978    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-btn7p" podUID="eb1a74b4-6ba1-4e66-8da1-fc67da81e371"
	Oct 18 09:50:01 functional-679784 kubelet[4050]: E1018 09:50:01.508816    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qskk4" podUID="5f50a6ef-a568-4552-80f9-41f0546d6341"
	Oct 18 09:50:12 functional-679784 kubelet[4050]: E1018 09:50:12.507464    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qskk4" podUID="5f50a6ef-a568-4552-80f9-41f0546d6341"
	Oct 18 09:50:13 functional-679784 kubelet[4050]: E1018 09:50:13.507785    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-btn7p" podUID="eb1a74b4-6ba1-4e66-8da1-fc67da81e371"
	Oct 18 09:50:24 functional-679784 kubelet[4050]: E1018 09:50:24.507513    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qskk4" podUID="5f50a6ef-a568-4552-80f9-41f0546d6341"
	Oct 18 09:50:28 functional-679784 kubelet[4050]: E1018 09:50:28.508168    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-btn7p" podUID="eb1a74b4-6ba1-4e66-8da1-fc67da81e371"
	Oct 18 09:50:37 functional-679784 kubelet[4050]: E1018 09:50:37.508069    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qskk4" podUID="5f50a6ef-a568-4552-80f9-41f0546d6341"
	Oct 18 09:50:40 functional-679784 kubelet[4050]: E1018 09:50:40.507478    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-btn7p" podUID="eb1a74b4-6ba1-4e66-8da1-fc67da81e371"
	Oct 18 09:50:50 functional-679784 kubelet[4050]: E1018 09:50:50.507408    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qskk4" podUID="5f50a6ef-a568-4552-80f9-41f0546d6341"
	Oct 18 09:50:51 functional-679784 kubelet[4050]: E1018 09:50:51.507565    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-btn7p" podUID="eb1a74b4-6ba1-4e66-8da1-fc67da81e371"
	Oct 18 09:51:04 functional-679784 kubelet[4050]: E1018 09:51:04.507901    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-qskk4" podUID="5f50a6ef-a568-4552-80f9-41f0546d6341"
	Oct 18 09:51:04 functional-679784 kubelet[4050]: E1018 09:51:04.508397    4050 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-btn7p" podUID="eb1a74b4-6ba1-4e66-8da1-fc67da81e371"
	
	
	==> storage-provisioner [bbafe73db09328155db25de51d7eb1dfd14309c8a84723663ea11e7bad691515] <==
	W1018 09:50:51.037052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:50:53.041138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:50:53.045889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:50:55.052646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:50:55.060662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:50:57.063870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:50:57.068376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:50:59.072403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:50:59.077261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:51:01.080352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:51:01.084843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:51:03.088015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:51:03.092274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:51:05.095922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:51:05.102771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:51:07.105575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:51:07.109911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:51:09.113262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:51:09.123434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:51:11.126914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:51:11.131938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:51:13.135611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:51:13.144656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:51:15.148393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:51:15.156501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
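	The steady two-second cadence of these deprecation warnings suggests a client repeatedly renewing a core/v1 Endpoints object; an Endpoints-based leader-election lock in the storage-provisioner would behave exactly like this, so the warnings read as noise rather than a failure. One way to inspect the replacement objects in this cluster, assuming the kubectl context used in the post-mortem below:
	
	  kubectl --context functional-679784 -n kube-system get endpointslices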
	
	
	==> storage-provisioner [fc026e70f7456e71e934cade8d21183f61cd198339d93e71daf431e62177c394] <==
	I1018 09:40:28.847019       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:40:28.848614       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
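The recurring failure in the kubelet log above and the pod events below is CRI-O's short-name enforcement: with short-name-mode = "enforcing" and more than one unqualified-search registry configured, an unqualified image such as "kicbase/echo-server" cannot be resolved non-interactively, so the pull is rejected as ambiguous. A minimal sketch of two possible remedies, assuming the deployment, container, and context names shown in this report (neither is necessarily the fix the test suite itself applies):

	# fully qualify the image so no short-name resolution is needed
	kubectl --context functional-679784 set image deployment/hello-node echo-server=docker.io/kicbase/echo-server:latest

	# or, inside the node, pin the short name to one registry via an alias
	# (a drop-in under /etc/containers/registries.conf.d/, containers-registries.conf(5) syntax)
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"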
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-679784 -n functional-679784
helpers_test.go:269: (dbg) Run:  kubectl --context functional-679784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-btn7p hello-node-connect-7d85dfc575-qskk4
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-679784 describe pod hello-node-75c85bcc94-btn7p hello-node-connect-7d85dfc575-qskk4
helpers_test.go:290: (dbg) kubectl --context functional-679784 describe pod hello-node-75c85bcc94-btn7p hello-node-connect-7d85dfc575-qskk4:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-btn7p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-679784/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 09:41:31 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7tcmg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7tcmg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m44s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-btn7p to functional-679784
	  Normal   Pulling    6m59s (x5 over 9m44s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m59s (x5 over 9m44s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m59s (x5 over 9m44s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m40s (x21 over 9m44s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m40s (x21 over 9m44s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-qskk4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-679784/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 09:41:12 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8phv5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8phv5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-qskk4 to functional-679784
	  Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m53s (x22 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m53s (x22 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.50s)
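Both hello-node pods sit in ImagePullBackOff because CRI-O's short-name resolution runs in enforcing mode: the unqualified reference kicbase/echo-server could map to more than one configured registry, so the pull is rejected as ambiguous instead of being guessed. A minimal workaround sketch, assuming Docker Hub is the intended source (the fully-qualified reference below is an assumption, not taken from this report):

    # Pin both deployments to a fully-qualified image so CRI-O has nothing
    # to disambiguate (docker.io is assumed to be the intended registry).
    kubectl --context functional-679784 set image deployment/hello-node \
      echo-server=docker.io/kicbase/echo-server:latest
    kubectl --context functional-679784 set image deployment/hello-node-connect \
      echo-server=docker.io/kicbase/echo-server:latest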

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-679784 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-679784 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-btn7p" [eb1a74b4-6ba1-4e66-8da1-fc67da81e371] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1018 09:43:26.203317  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:43:53.916592  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:48:26.203238  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-679784 -n functional-679784
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-18 09:51:31.822572781 +0000 UTC m=+1275.408233051
functional_test.go:1460: (dbg) Run:  kubectl --context functional-679784 describe po hello-node-75c85bcc94-btn7p -n default
functional_test.go:1460: (dbg) kubectl --context functional-679784 describe po hello-node-75c85bcc94-btn7p -n default:
Name:             hello-node-75c85bcc94-btn7p
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-679784/192.168.49.2
Start Time:       Sat, 18 Oct 2025 09:41:31 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7tcmg (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-7tcmg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-btn7p to functional-679784
  Normal   Pulling    7m15s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m15s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m15s (x5 over 10m)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m56s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m56s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-679784 logs hello-node-75c85bcc94-btn7p -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-679784 logs hello-node-75c85bcc94-btn7p -n default: exit status 1 (127.294065ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-btn7p" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-679784 logs hello-node-75c85bcc94-btn7p -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.93s)
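The harness waits 10m0s for pods matching app=hello-node to become ready, and the wait dies on the context deadline because the image never pulls. Outside the harness, the same readiness gate can be reproduced with kubectl's built-in wait (a sketch, not the test's own helper):

    # Block until the hello-node pods report Ready, or fail after 10 minutes.
    kubectl --context functional-679784 -n default wait pod \
      -l app=hello-node --for=condition=Ready --timeout=10m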

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-679784 service --namespace=default --https --url hello-node: exit status 115 (484.456244ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30457
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-679784 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)
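Exit status 115 (SVC_UNREACHABLE) means minikube computed the NodePort URL (https://192.168.49.2:30457) but found no running pod behind the service. The empty backend set can be confirmed directly (a sketch; the label below is the standard one EndpointSlices carry):

    # No ready endpoints here means the service has nothing to route to.
    kubectl --context functional-679784 -n default get endpointslices \
      -l kubernetes.io/service-name=hello-node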

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-679784 service hello-node --url --format={{.IP}}: exit status 115 (498.680545ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-679784 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.50s)
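The --format flag takes a Go text/template rendered against the resolved service URL, which is why {{.IP}} still prints 192.168.49.2 even though the command fails overall on the missing backend. The NodePort half of that address can equally be read straight off the Service object (the jsonpath expression below is a generic sketch):

    # Read the NodePort directly; combined with the node IP this reconstructs
    # the URL minikube would have printed.
    kubectl --context functional-679784 -n default get svc hello-node \
      -o jsonpath='{.spec.ports[0].nodePort}'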

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-679784 service hello-node --url: exit status 115 (486.957048ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30457
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-679784 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30457
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.49s)
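Note that the printed URL itself is well-formed; the exit code reflects the service check, not URL construction. Probing the endpoint directly makes the distinction visible (the curl invocation is illustrative):

    # The NodePort is allocated, but with no ready pod behind the service
    # the request cannot be answered.
    curl -sf --max-time 5 http://192.168.49.2:30457/ || echo "no backend for hello-node"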

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 image load --daemon kicbase/echo-server:functional-679784 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-679784" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)
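image load --daemon copies an image from the host's Docker daemon into the cluster's container runtime, and the assertion then looks for the tag in minikube image ls. The same round trip can be checked by hand; the sketch below (which applies equally to the reload and tag-and-load variants that follow) assumes the image already exists in the host daemon:

    # Tag in the host daemon, push into the cluster runtime, then confirm
    # the tag is visible to the runtime inside the node.
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-679784
    minikube -p functional-679784 image load --daemon kicbase/echo-server:functional-679784
    minikube -p functional-679784 image ls | grep echo-server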

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 image load --daemon kicbase/echo-server:functional-679784 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-679784" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-679784
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 image load --daemon kicbase/echo-server:functional-679784 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-679784" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 image save kicbase/echo-server:functional-679784 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)
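image save should export the image from the cluster runtime to a tarball on the host; here the command exits cleanly but writes nothing, which is also the root of the ImageLoadFromFile failure below. A defensive sketch (with an illustrative path) that surfaces the missing artifact immediately:

    # Save, then verify the tarball is non-empty before relying on it.
    minikube -p functional-679784 image save \
      kicbase/echo-server:functional-679784 /tmp/echo-server-save.tar
    test -s /tmp/echo-server-save.tar && echo "tarball written" \
      || echo "image save produced no file"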

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1018 09:51:45.819945  323784 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:51:45.820100  323784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:51:45.820110  323784 out.go:374] Setting ErrFile to fd 2...
	I1018 09:51:45.820115  323784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:51:45.820382  323784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:51:45.820985  323784 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:51:45.821107  323784 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:51:45.821599  323784 cli_runner.go:164] Run: docker container inspect functional-679784 --format={{.State.Status}}
	I1018 09:51:45.858503  323784 ssh_runner.go:195] Run: systemctl --version
	I1018 09:51:45.858567  323784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679784
	I1018 09:51:45.885398  323784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/functional-679784/id_rsa Username:docker}
	I1018 09:51:45.991852  323784 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1018 09:51:45.991903  323784 cache_images.go:254] Failed to load cached images for "functional-679784": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1018 09:51:45.991925  323784 cache_images.go:266] failed pushing to: functional-679784

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)
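The stat error in the stderr above ("no such file or directory") is the direct consequence of the preceding image save never materializing the tar. Guarding the load on the artifact's existence separates the two failures (a sketch, same illustrative path as above):

    # Only attempt the load if the artifact from `image save` actually exists.
    if [ -s /tmp/echo-server-save.tar ]; then
      minikube -p functional-679784 image load /tmp/echo-server-save.tar
    else
      echo "skipping load: tarball missing (image save failed upstream)"
    fi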

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-679784
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 image save --daemon kicbase/echo-server:functional-679784 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-679784
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-679784: exit status 1 (17.23497ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-679784

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-679784

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
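image save --daemon is expected to export the image from the cluster runtime back into the host's Docker daemon; the test then looks it up under the localhost/ prefix. Verifying the handoff by hand (a sketch):

    # After a successful save --daemon the image should be inspectable on the host.
    minikube -p functional-679784 image save --daemon kicbase/echo-server:functional-679784
    docker image inspect localhost/kicbase/echo-server:functional-679784 --format '{{.Id}}'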

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-604405 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-604405 --output=json --user=testUser: exit status 80 (1.733414233s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d8fd51f4-b3c0-4218-ab31-6780c8984c82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-604405 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"51953702-f7f3-4c6e-8a5e-11d1271e4b9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-18T10:06:26Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"c6e8095f-4b60-471b-82ec-be918360ac45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-604405 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.73s)
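With --output=json, minikube emits one CloudEvents-style JSON object per line; the failure detail lives in the data.message field of events typed io.k8s.sigs.minikube.error. A hedged jq filter for pulling those messages out of a run:

    # Extract error messages from minikube's line-delimited CloudEvents output.
    out/minikube-linux-arm64 pause -p json-output-604405 --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'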

                                                
                                    
x
+
TestJSONOutput/unpause/Command (2.22s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-604405 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-604405 --output=json --user=testUser: exit status 80 (2.221949366s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e61b05bf-2ac1-451a-96e5-22fcdfed9cae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-604405 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"6f3ec5d0-f93c-4423-8b3d-3801089c54b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-18T10:06:29Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"73d11d61-abe4-480f-99d1-ac2662eede14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-604405 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.22s)
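pause and unpause fail identically: runc list inside the node cannot open /run/runc, i.e. the runtime state directory minikube queries does not exist. The node's actual runtime state can be inspected over SSH (a sketch; the paths are conventional defaults and may differ by configuration):

    # Check which runtime root actually holds container state inside the node.
    minikube -p json-output-604405 ssh -- "sudo ls /run/runc; sudo crictl ps"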

                                                
                                    
x
+
TestScheduledStopUnix (40.32s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-023595 --memory=3072 --driver=docker  --container-runtime=crio
E1018 10:21:03.597327  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-023595 --memory=3072 --driver=docker  --container-runtime=crio: (35.254864994s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-023595 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-023595 -n scheduled-stop-023595
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-023595 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 422905 running but should have been killed on reschedule of stop
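Each new minikube stop --schedule is supposed to kill the daemonized process left by the previous invocation before scheduling its own; the assertion above caught process 422905 surviving such a reschedule. A pending schedule can also be cleared explicitly, which is minikube's own escape hatch for this state (a sketch):

    # Reschedule, then cancel outright; after --cancel-scheduled no
    # scheduled-stop process should remain for the profile.
    out/minikube-linux-arm64 stop -p scheduled-stop-023595 --schedule 15s
    out/minikube-linux-arm64 stop -p scheduled-stop-023595 --cancel-scheduled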
panic.go:636: *** TestScheduledStopUnix FAILED at 2025-10-18 10:21:11.588994331 +0000 UTC m=+3055.174654543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestScheduledStopUnix]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect scheduled-stop-023595
helpers_test.go:243: (dbg) docker inspect scheduled-stop-023595:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c9ed16db5a130d091d4e869e03d2d9678d7d57349f6cb860762c3eb13a5a02d4",
	        "Created": "2025-10-18T10:20:41.277558843Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 421118,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:20:41.347233843Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/c9ed16db5a130d091d4e869e03d2d9678d7d57349f6cb860762c3eb13a5a02d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c9ed16db5a130d091d4e869e03d2d9678d7d57349f6cb860762c3eb13a5a02d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/c9ed16db5a130d091d4e869e03d2d9678d7d57349f6cb860762c3eb13a5a02d4/hosts",
	        "LogPath": "/var/lib/docker/containers/c9ed16db5a130d091d4e869e03d2d9678d7d57349f6cb860762c3eb13a5a02d4/c9ed16db5a130d091d4e869e03d2d9678d7d57349f6cb860762c3eb13a5a02d4-json.log",
	        "Name": "/scheduled-stop-023595",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-023595:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-023595",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c9ed16db5a130d091d4e869e03d2d9678d7d57349f6cb860762c3eb13a5a02d4",
	                "LowerDir": "/var/lib/docker/overlay2/b7ca0ed9dbe45695d6b470a04e54f2106e4666d59b9604a364faa86d1779510b-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b7ca0ed9dbe45695d6b470a04e54f2106e4666d59b9604a364faa86d1779510b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b7ca0ed9dbe45695d6b470a04e54f2106e4666d59b9604a364faa86d1779510b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b7ca0ed9dbe45695d6b470a04e54f2106e4666d59b9604a364faa86d1779510b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-023595",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-023595/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-023595",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-023595",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-023595",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0d15790be39176a86350bdcca55aafc41c8b1e4a4562cd72c85d9c392a92e742",
	            "SandboxKey": "/var/run/docker/netns/0d15790be391",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33333"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33334"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33337"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33335"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33336"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-023595": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:87:e2:f7:9d:cc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "536070fe526d5eb6389f3524a95a7e0e6ff91dbf562cc76b660deedee8a3c15d",
	                    "EndpointID": "169980c894f1c62c92f5578a272ed53288bb4348d3c613ac62fdc2a655871446",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "scheduled-stop-023595",
	                        "c9ed16db5a13"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-023595 -n scheduled-stop-023595
helpers_test.go:252: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-023595 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p scheduled-stop-023595 logs -n 25: (1.072453696s)
helpers_test.go:260: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p multinode-710351                                                                                                                                       │ multinode-710351      │ jenkins │ v1.37.0 │ 18 Oct 25 10:15 UTC │ 18 Oct 25 10:15 UTC │
	│ start   │ -p multinode-710351 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-710351      │ jenkins │ v1.37.0 │ 18 Oct 25 10:15 UTC │ 18 Oct 25 10:16 UTC │
	│ node    │ list -p multinode-710351                                                                                                                                  │ multinode-710351      │ jenkins │ v1.37.0 │ 18 Oct 25 10:16 UTC │                     │
	│ node    │ multinode-710351 node delete m03                                                                                                                          │ multinode-710351      │ jenkins │ v1.37.0 │ 18 Oct 25 10:16 UTC │ 18 Oct 25 10:16 UTC │
	│ stop    │ multinode-710351 stop                                                                                                                                     │ multinode-710351      │ jenkins │ v1.37.0 │ 18 Oct 25 10:16 UTC │ 18 Oct 25 10:16 UTC │
	│ start   │ -p multinode-710351 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio                                                          │ multinode-710351      │ jenkins │ v1.37.0 │ 18 Oct 25 10:16 UTC │ 18 Oct 25 10:17 UTC │
	│ node    │ list -p multinode-710351                                                                                                                                  │ multinode-710351      │ jenkins │ v1.37.0 │ 18 Oct 25 10:17 UTC │                     │
	│ start   │ -p multinode-710351-m02 --driver=docker  --container-runtime=crio                                                                                         │ multinode-710351-m02  │ jenkins │ v1.37.0 │ 18 Oct 25 10:17 UTC │                     │
	│ start   │ -p multinode-710351-m03 --driver=docker  --container-runtime=crio                                                                                         │ multinode-710351-m03  │ jenkins │ v1.37.0 │ 18 Oct 25 10:17 UTC │ 18 Oct 25 10:18 UTC │
	│ node    │ add -p multinode-710351                                                                                                                                   │ multinode-710351      │ jenkins │ v1.37.0 │ 18 Oct 25 10:18 UTC │                     │
	│ delete  │ -p multinode-710351-m03                                                                                                                                   │ multinode-710351-m03  │ jenkins │ v1.37.0 │ 18 Oct 25 10:18 UTC │ 18 Oct 25 10:18 UTC │
	│ delete  │ -p multinode-710351                                                                                                                                       │ multinode-710351      │ jenkins │ v1.37.0 │ 18 Oct 25 10:18 UTC │ 18 Oct 25 10:18 UTC │
	│ start   │ -p test-preload-968310 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0 │ test-preload-968310   │ jenkins │ v1.37.0 │ 18 Oct 25 10:18 UTC │ 18 Oct 25 10:19 UTC │
	│ image   │ test-preload-968310 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-968310   │ jenkins │ v1.37.0 │ 18 Oct 25 10:19 UTC │ 18 Oct 25 10:19 UTC │
	│ stop    │ -p test-preload-968310                                                                                                                                    │ test-preload-968310   │ jenkins │ v1.37.0 │ 18 Oct 25 10:19 UTC │ 18 Oct 25 10:19 UTC │
	│ start   │ -p test-preload-968310 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                         │ test-preload-968310   │ jenkins │ v1.37.0 │ 18 Oct 25 10:19 UTC │ 18 Oct 25 10:20 UTC │
	│ image   │ test-preload-968310 image list                                                                                                                            │ test-preload-968310   │ jenkins │ v1.37.0 │ 18 Oct 25 10:20 UTC │ 18 Oct 25 10:20 UTC │
	│ delete  │ -p test-preload-968310                                                                                                                                    │ test-preload-968310   │ jenkins │ v1.37.0 │ 18 Oct 25 10:20 UTC │ 18 Oct 25 10:20 UTC │
	│ start   │ -p scheduled-stop-023595 --memory=3072 --driver=docker  --container-runtime=crio                                                                          │ scheduled-stop-023595 │ jenkins │ v1.37.0 │ 18 Oct 25 10:20 UTC │ 18 Oct 25 10:21 UTC │
	│ stop    │ -p scheduled-stop-023595 --schedule 5m                                                                                                                    │ scheduled-stop-023595 │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ stop    │ -p scheduled-stop-023595 --schedule 5m                                                                                                                    │ scheduled-stop-023595 │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ stop    │ -p scheduled-stop-023595 --schedule 5m                                                                                                                    │ scheduled-stop-023595 │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ stop    │ -p scheduled-stop-023595 --schedule 15s                                                                                                                   │ scheduled-stop-023595 │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ stop    │ -p scheduled-stop-023595 --schedule 15s                                                                                                                   │ scheduled-stop-023595 │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ stop    │ -p scheduled-stop-023595 --schedule 15s                                                                                                                   │ scheduled-stop-023595 │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:20:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 10:20:35.859636  420727 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:20:35.859767  420727 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:20:35.859772  420727 out.go:374] Setting ErrFile to fd 2...
	I1018 10:20:35.859774  420727 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:20:35.860037  420727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:20:35.860425  420727 out.go:368] Setting JSON to false
	I1018 10:20:35.861277  420727 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7386,"bootTime":1760775450,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:20:35.861335  420727 start.go:141] virtualization:  
	I1018 10:20:35.865346  420727 out.go:179] * [scheduled-stop-023595] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:20:35.870230  420727 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:20:35.870340  420727 notify.go:220] Checking for updates...
	I1018 10:20:35.877152  420727 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:20:35.880553  420727 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:20:35.883969  420727 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:20:35.887270  420727 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:20:35.890352  420727 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:20:35.893636  420727 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:20:35.925932  420727 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:20:35.926058  420727 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:20:35.984542  420727 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 10:20:35.975614104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:20:35.984662  420727 docker.go:318] overlay module found
	I1018 10:20:35.988044  420727 out.go:179] * Using the docker driver based on user configuration
	I1018 10:20:35.991084  420727 start.go:305] selected driver: docker
	I1018 10:20:35.991101  420727 start.go:925] validating driver "docker" against <nil>
	I1018 10:20:35.991112  420727 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:20:35.991821  420727 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:20:36.049370  420727 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 10:20:36.040024038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:20:36.049534  420727 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 10:20:36.049779  420727 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 10:20:36.052787  420727 out.go:179] * Using Docker driver with root privileges
	I1018 10:20:36.055754  420727 cni.go:84] Creating CNI manager for ""
	I1018 10:20:36.055818  420727 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:20:36.055828  420727 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 10:20:36.055902  420727 start.go:349] cluster config:
	{Name:scheduled-stop-023595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-023595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
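The cluster config dumped above is exactly what gets persisted to the profile's config.json (see the "Saving config" line below). As a sketch for re-inspecting it after the run, using the path from this log and assuming jq is installed:

    # Pretty-print the saved cluster config for this profile.
    jq . /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/config.json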
	I1018 10:20:36.060929  420727 out.go:179] * Starting "scheduled-stop-023595" primary control-plane node in "scheduled-stop-023595" cluster
	I1018 10:20:36.063924  420727 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:20:36.066764  420727 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:20:36.069701  420727 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:20:36.069761  420727 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 10:20:36.069770  420727 cache.go:58] Caching tarball of preloaded images
	I1018 10:20:36.069787  420727 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:20:36.069855  420727 preload.go:233] Found /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 10:20:36.069864  420727 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 10:20:36.070205  420727 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/config.json ...
	I1018 10:20:36.070225  420727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/config.json: {Name:mk9947edbb00d34426245e05b9cee254802fec50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:20:36.089164  420727 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:20:36.089177  420727 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
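The pull is skipped because the pinned kicbase digest already exists in the local daemon. A sketch of the equivalent manual check, with the digest copied from the log above:

    # Prints the image ID only if this exact digest is present locally.
    docker image inspect --format '{{.Id}}' \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6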
	I1018 10:20:36.089218  420727 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:20:36.089252  420727 start.go:360] acquireMachinesLock for scheduled-stop-023595: {Name:mk4630b5efcea9e640aafc24fbb601ad7b4a0769 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:20:36.089373  420727 start.go:364] duration metric: took 105.746µs to acquireMachinesLock for "scheduled-stop-023595"
	I1018 10:20:36.089399  420727 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-023595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-023595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:20:36.089465  420727 start.go:125] createHost starting for "" (driver="docker")
	I1018 10:20:36.093027  420727 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 10:20:36.093320  420727 start.go:159] libmachine.API.Create for "scheduled-stop-023595" (driver="docker")
	I1018 10:20:36.093379  420727 client.go:168] LocalClient.Create starting
	I1018 10:20:36.093484  420727 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem
	I1018 10:20:36.093519  420727 main.go:141] libmachine: Decoding PEM data...
	I1018 10:20:36.093534  420727 main.go:141] libmachine: Parsing certificate...
	I1018 10:20:36.093588  420727 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem
	I1018 10:20:36.093603  420727 main.go:141] libmachine: Decoding PEM data...
	I1018 10:20:36.093612  420727 main.go:141] libmachine: Parsing certificate...
	I1018 10:20:36.093991  420727 cli_runner.go:164] Run: docker network inspect scheduled-stop-023595 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 10:20:36.110634  420727 cli_runner.go:211] docker network inspect scheduled-stop-023595 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 10:20:36.110708  420727 network_create.go:284] running [docker network inspect scheduled-stop-023595] to gather additional debugging logs...
	I1018 10:20:36.110723  420727 cli_runner.go:164] Run: docker network inspect scheduled-stop-023595
	W1018 10:20:36.126693  420727 cli_runner.go:211] docker network inspect scheduled-stop-023595 returned with exit code 1
	I1018 10:20:36.126713  420727 network_create.go:287] error running [docker network inspect scheduled-stop-023595]: docker network inspect scheduled-stop-023595: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-023595 not found
	I1018 10:20:36.126726  420727 network_create.go:289] output of [docker network inspect scheduled-stop-023595]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-023595 not found
	
	** /stderr **
	I1018 10:20:36.126832  420727 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:20:36.144129  420727 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-57e2bd20fa2f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:61:d0:06:18:0c} reservation:<nil>}
	I1018 10:20:36.144354  420727 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb4a8c61b69d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:8c:0f:03:ab:d8} reservation:<nil>}
	I1018 10:20:36.144670  420727 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1d3a8356dfdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:ce:7a:d0:e4:d4} reservation:<nil>}
	I1018 10:20:36.144998  420727 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019bf180}
	I1018 10:20:36.145014  420727 network_create.go:124] attempt to create docker network scheduled-stop-023595 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 10:20:36.145106  420727 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-023595 scheduled-stop-023595
	I1018 10:20:36.204409  420727 network_create.go:108] docker network scheduled-stop-023595 192.168.76.0/24 created
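Subnet selection above walks the private ranges in order (192.168.49/58/67 are taken by existing bridges) and settles on 192.168.76.0/24, which the network-create command then uses verbatim. A sketch for verifying the network that was just created, with expected values taken from the log:

    # Expect: 192.168.76.0/24 192.168.76.1
    docker network inspect scheduled-stop-023595 \
      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'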
	I1018 10:20:36.204431  420727 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-023595" container
	I1018 10:20:36.204503  420727 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 10:20:36.220148  420727 cli_runner.go:164] Run: docker volume create scheduled-stop-023595 --label name.minikube.sigs.k8s.io=scheduled-stop-023595 --label created_by.minikube.sigs.k8s.io=true
	I1018 10:20:36.237532  420727 oci.go:103] Successfully created a docker volume scheduled-stop-023595
	I1018 10:20:36.237609  420727 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-023595-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-023595 --entrypoint /usr/bin/test -v scheduled-stop-023595:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 10:20:36.769951  420727 oci.go:107] Successfully prepared a docker volume scheduled-stop-023595
	I1018 10:20:36.770001  420727 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:20:36.770020  420727 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 10:20:36.770095  420727 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-023595:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 10:20:41.207337  420727 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-023595:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.437183757s)
	I1018 10:20:41.207359  420727 kic.go:203] duration metric: took 4.437336001s to extract preloaded images to volume ...
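The preload is unpacked by a throwaway container that mounts the lz4 tarball read-only and untars it into the named volume. A sketch for spot-checking the result; the /var/lib/containers path is an assumption about where cri-o keeps its image store:

    # List the extracted storage inside the volume via a one-off container.
    docker run --rm --entrypoint /bin/ls \
      -v scheduled-stop-023595:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 \
      /var/lib/containers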
	W1018 10:20:41.207489  420727 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 10:20:41.207590  420727 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 10:20:41.262853  420727 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-023595 --name scheduled-stop-023595 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-023595 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-023595 --network scheduled-stop-023595 --ip 192.168.76.2 --volume scheduled-stop-023595:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 10:20:41.582019  420727 cli_runner.go:164] Run: docker container inspect scheduled-stop-023595 --format={{.State.Running}}
	I1018 10:20:41.607995  420727 cli_runner.go:164] Run: docker container inspect scheduled-stop-023595 --format={{.State.Status}}
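The node itself is this privileged container, pinned to 192.168.76.2 on the new network, with ports 22, 2376, 5000, 8443 and 32443 published on loopback. A sketch for listing the randomly assigned host-side bindings (for example the 33333 SSH port that appears later in the log):

    docker port scheduled-stop-023595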
	I1018 10:20:41.626483  420727 cli_runner.go:164] Run: docker exec scheduled-stop-023595 stat /var/lib/dpkg/alternatives/iptables
	I1018 10:20:41.676267  420727 oci.go:144] the created container "scheduled-stop-023595" has a running status.
	I1018 10:20:41.676295  420727 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/scheduled-stop-023595/id_rsa...
	I1018 10:20:41.953130  420727 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-293333/.minikube/machines/scheduled-stop-023595/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 10:20:41.974992  420727 cli_runner.go:164] Run: docker container inspect scheduled-stop-023595 --format={{.State.Status}}
	I1018 10:20:42.002726  420727 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 10:20:42.002737  420727 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-023595 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 10:20:42.070957  420727 cli_runner.go:164] Run: docker container inspect scheduled-stop-023595 --format={{.State.Status}}
	I1018 10:20:42.105464  420727 machine.go:93] provisionDockerMachine start ...
	I1018 10:20:42.105597  420727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-023595
	I1018 10:20:42.137701  420727 main.go:141] libmachine: Using SSH client type: native
	I1018 10:20:42.138475  420727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I1018 10:20:42.138486  420727 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:20:42.139320  420727 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 10:20:45.301865  420727 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-023595
	
	I1018 10:20:45.301880  420727 ubuntu.go:182] provisioning hostname "scheduled-stop-023595"
	I1018 10:20:45.301959  420727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-023595
	I1018 10:20:45.324910  420727 main.go:141] libmachine: Using SSH client type: native
	I1018 10:20:45.325291  420727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I1018 10:20:45.325301  420727 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-023595 && echo "scheduled-stop-023595" | sudo tee /etc/hostname
	I1018 10:20:45.498688  420727 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-023595
	
	I1018 10:20:45.498756  420727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-023595
	I1018 10:20:45.516820  420727 main.go:141] libmachine: Using SSH client type: native
	I1018 10:20:45.517141  420727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I1018 10:20:45.517156  420727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-023595' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-023595/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-023595' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:20:45.665444  420727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 10:20:45.665461  420727 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:20:45.665490  420727 ubuntu.go:190] setting up certificates
	I1018 10:20:45.665499  420727 provision.go:84] configureAuth start
	I1018 10:20:45.665557  420727 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-023595
	I1018 10:20:45.682601  420727 provision.go:143] copyHostCerts
	I1018 10:20:45.682667  420727 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:20:45.682675  420727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:20:45.682752  420727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:20:45.682844  420727 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:20:45.682848  420727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:20:45.682873  420727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:20:45.682934  420727 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:20:45.682937  420727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:20:45.682960  420727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:20:45.683040  420727 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-023595 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-023595]
	I1018 10:20:46.379660  420727 provision.go:177] copyRemoteCerts
	I1018 10:20:46.379716  420727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:20:46.379754  420727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-023595
	I1018 10:20:46.396280  420727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/scheduled-stop-023595/id_rsa Username:docker}
	I1018 10:20:46.500870  420727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:20:46.517973  420727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1018 10:20:46.535519  420727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:20:46.553561  420727 provision.go:87] duration metric: took 888.049933ms to configureAuth
	I1018 10:20:46.553577  420727 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:20:46.553759  420727 config.go:182] Loaded profile config "scheduled-stop-023595": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:20:46.553852  420727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-023595
	I1018 10:20:46.570330  420727 main.go:141] libmachine: Using SSH client type: native
	I1018 10:20:46.570616  420727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I1018 10:20:46.570629  420727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:20:46.823974  420727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
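The provisioner writes the service CIDR as an insecure-registry flag into /etc/sysconfig/crio.minikube and restarts cri-o. A sketch for confirming it from the host once the profile is up:

    # Expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    minikube -p scheduled-stop-023595 ssh -- cat /etc/sysconfig/crio.minikube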
	
	I1018 10:20:46.823987  420727 machine.go:96] duration metric: took 4.71850947s to provisionDockerMachine
	I1018 10:20:46.823995  420727 client.go:171] duration metric: took 10.730611029s to LocalClient.Create
	I1018 10:20:46.824016  420727 start.go:167] duration metric: took 10.730698355s to libmachine.API.Create "scheduled-stop-023595"
	I1018 10:20:46.824022  420727 start.go:293] postStartSetup for "scheduled-stop-023595" (driver="docker")
	I1018 10:20:46.824032  420727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:20:46.824136  420727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:20:46.824174  420727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-023595
	I1018 10:20:46.841395  420727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/scheduled-stop-023595/id_rsa Username:docker}
	I1018 10:20:46.945055  420727 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:20:46.948609  420727 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:20:46.948627  420727 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:20:46.948639  420727 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:20:46.948695  420727 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:20:46.948771  420727 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:20:46.948903  420727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:20:46.956280  420727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:20:46.977269  420727 start.go:296] duration metric: took 153.229862ms for postStartSetup
	I1018 10:20:46.977649  420727 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-023595
	I1018 10:20:46.993989  420727 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/config.json ...
	I1018 10:20:46.994268  420727 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:20:46.994308  420727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-023595
	I1018 10:20:47.010667  420727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/scheduled-stop-023595/id_rsa Username:docker}
	I1018 10:20:47.110245  420727 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:20:47.114701  420727 start.go:128] duration metric: took 11.025220475s to createHost
	I1018 10:20:47.114716  420727 start.go:83] releasing machines lock for "scheduled-stop-023595", held for 11.025336297s
	I1018 10:20:47.114786  420727 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-023595
	I1018 10:20:47.131966  420727 ssh_runner.go:195] Run: cat /version.json
	I1018 10:20:47.132009  420727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-023595
	I1018 10:20:47.132250  420727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:20:47.132310  420727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-023595
	I1018 10:20:47.160339  420727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/scheduled-stop-023595/id_rsa Username:docker}
	I1018 10:20:47.160451  420727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/scheduled-stop-023595/id_rsa Username:docker}
	I1018 10:20:47.352048  420727 ssh_runner.go:195] Run: systemctl --version
	I1018 10:20:47.358568  420727 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:20:47.394535  420727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:20:47.398846  420727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:20:47.398905  420727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:20:47.426046  420727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 10:20:47.426058  420727 start.go:495] detecting cgroup driver to use...
	I1018 10:20:47.426088  420727 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:20:47.426134  420727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:20:47.441989  420727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:20:47.455141  420727 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:20:47.455196  420727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:20:47.472549  420727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:20:47.491154  420727 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:20:47.609108  420727 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:20:47.724724  420727 docker.go:234] disabling docker service ...
	I1018 10:20:47.724781  420727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:20:47.746232  420727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:20:47.760729  420727 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:20:47.872926  420727 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:20:47.986179  420727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:20:47.999767  420727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:20:48.013685  420727 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:20:48.013741  420727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:20:48.026291  420727 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:20:48.026361  420727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:20:48.036663  420727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:20:48.046569  420727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:20:48.056072  420727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:20:48.065069  420727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:20:48.074238  420727 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:20:48.088642  420727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:20:48.098834  420727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:20:48.107080  420727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:20:48.114781  420727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:20:48.231582  420727 ssh_runner.go:195] Run: sudo systemctl restart crio
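The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, the cgroupfs cgroup manager, conmon_cgroup = "pod", and the net.ipv4.ip_unprivileged_port_start=0 sysctl, followed by a daemon-reload and cri-o restart. Consolidated into one sketch using the same sed expressions as the log:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio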
	I1018 10:20:48.362508  420727 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:20:48.362568  420727 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:20:48.367149  420727 start.go:563] Will wait 60s for crictl version
	I1018 10:20:48.367211  420727 ssh_runner.go:195] Run: which crictl
	I1018 10:20:48.370985  420727 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:20:48.399447  420727 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:20:48.399538  420727 ssh_runner.go:195] Run: crio --version
	I1018 10:20:48.427467  420727 ssh_runner.go:195] Run: crio --version
	I1018 10:20:48.461515  420727 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:20:48.464394  420727 cli_runner.go:164] Run: docker network inspect scheduled-stop-023595 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:20:48.481570  420727 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 10:20:48.485825  420727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:20:48.496298  420727 kubeadm.go:883] updating cluster {Name:scheduled-stop-023595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-023595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:20:48.496415  420727 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:20:48.496481  420727 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:20:48.528991  420727 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:20:48.529002  420727 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:20:48.529059  420727 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:20:48.556269  420727 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:20:48.556281  420727 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:20:48.556288  420727 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 10:20:48.556372  420727 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=scheduled-stop-023595 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-023595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
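This ExecStart override is installed as a systemd drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). A sketch for inspecting the merged unit on the node:

    minikube -p scheduled-stop-023595 ssh -- systemctl cat kubelet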
	I1018 10:20:48.556454  420727 ssh_runner.go:195] Run: crio config
	I1018 10:20:48.632104  420727 cni.go:84] Creating CNI manager for ""
	I1018 10:20:48.632115  420727 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:20:48.632126  420727 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:20:48.632152  420727 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-023595 NodeName:scheduled-stop-023595 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:20:48.632285  420727 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "scheduled-stop-023595"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
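The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp line below) and later fed to kubeadm init. As a sketch, it can be sanity-checked against the pinned kubeadm binary, assuming the `config validate` subcommand available in recent kubeadm releases:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new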
	
	I1018 10:20:48.632433  420727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:20:48.640281  420727 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:20:48.640346  420727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:20:48.648177  420727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1018 10:20:48.661032  420727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:20:48.674634  420727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1018 10:20:48.688380  420727 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:20:48.691953  420727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
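Both host.minikube.internal and control-plane.minikube.internal are pinned with the same idempotent pattern: filter out any old entry, append the new one, then copy the result back over /etc/hosts. A generalized sketch of that pattern (the function name is illustrative):

    pin_host() {  # usage: pin_host <ip> <name>
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > /tmp/h.$$
      sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
    }
    pin_host 192.168.76.2 control-plane.minikube.internal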
	I1018 10:20:48.701707  420727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:20:48.824564  420727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:20:48.841812  420727 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595 for IP: 192.168.76.2
	I1018 10:20:48.841823  420727 certs.go:195] generating shared ca certs ...
	I1018 10:20:48.841838  420727 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:20:48.841981  420727 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:20:48.842027  420727 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:20:48.842033  420727 certs.go:257] generating profile certs ...
	I1018 10:20:48.842093  420727 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/client.key
	I1018 10:20:48.842103  420727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/client.crt with IP's: []
	I1018 10:20:48.950242  420727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/client.crt ...
	I1018 10:20:48.950260  420727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/client.crt: {Name:mkdc6357013927c9c8dadd962eb1fc370d2b2771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:20:48.950481  420727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/client.key ...
	I1018 10:20:48.950492  420727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/client.key: {Name:mkfc22c1653ec61e91d75ea175611379abdd8eb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:20:48.950636  420727 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/apiserver.key.1a5d6ceb
	I1018 10:20:48.950650  420727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/apiserver.crt.1a5d6ceb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 10:20:49.204348  420727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/apiserver.crt.1a5d6ceb ...
	I1018 10:20:49.204363  420727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/apiserver.crt.1a5d6ceb: {Name:mk6680c0b76d1b564a37e873620717c95ab0d1cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:20:49.204564  420727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/apiserver.key.1a5d6ceb ...
	I1018 10:20:49.204572  420727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/apiserver.key.1a5d6ceb: {Name:mk097acd6c212f41908acdb64fcc0b47a1eff990 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:20:49.204655  420727 certs.go:382] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/apiserver.crt.1a5d6ceb -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/apiserver.crt
	I1018 10:20:49.204738  420727 certs.go:386] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/apiserver.key.1a5d6ceb -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/apiserver.key
	I1018 10:20:49.204791  420727 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/proxy-client.key
	I1018 10:20:49.204803  420727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/proxy-client.crt with IP's: []
	I1018 10:20:49.833757  420727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/proxy-client.crt ...
	I1018 10:20:49.833774  420727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/proxy-client.crt: {Name:mk52c9acaa75ff200ca7e6ed8ca0dd4a91cb03a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:20:49.833976  420727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/proxy-client.key ...
	I1018 10:20:49.833985  420727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/proxy-client.key: {Name:mka1b4f8fdecc5b556f5e21a8aa0b681672f8ecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:20:49.834186  420727 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:20:49.834221  420727 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:20:49.834236  420727 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:20:49.834258  420727 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:20:49.834280  420727 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:20:49.834301  420727 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:20:49.834341  420727 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:20:49.834939  420727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:20:49.854744  420727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:20:49.873051  420727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:20:49.891561  420727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:20:49.909435  420727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 10:20:49.927856  420727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:20:49.946347  420727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:20:49.964000  420727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/scheduled-stop-023595/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 10:20:49.981260  420727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:20:49.998478  420727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:20:50.015574  420727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:20:50.037405  420727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:20:50.052920  420727 ssh_runner.go:195] Run: openssl version
	I1018 10:20:50.059767  420727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:20:50.068582  420727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:20:50.072628  420727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:20:50.072700  420727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:20:50.114704  420727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 10:20:50.123929  420727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:20:50.132806  420727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:20:50.136743  420727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:20:50.136802  420727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:20:50.178228  420727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:20:50.187179  420727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:20:50.196033  420727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:20:50.199741  420727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:20:50.199800  420727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:20:50.241833  420727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
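
Each trust-store step above follows the same pattern: "openssl x509 -hash -noout" prints the certificate's OpenSSL subject hash (b5213941, 51391683, 3ec20f2e here), and a symlink named "<hash>.0" under /etc/ssl/certs is what OpenSSL-based tools actually resolve. A rough local equivalent in Go (a sketch; minikube runs these commands over SSH inside the node, and writing /etc/ssl/certs needs root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // hashLink installs pemPath into the system trust store by symlinking it
    // under its OpenSSL subject hash, mirroring the openssl/ln sequence above.
    func hashLink(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // emulate ln -fs: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
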
	I1018 10:20:50.250428  420727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:20:50.254393  420727 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 10:20:50.254455  420727 kubeadm.go:400] StartCluster: {Name:scheduled-stop-023595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-023595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:20:50.254516  420727 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:20:50.254576  420727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:20:50.284592  420727 cri.go:89] found id: ""
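
The empty ID list here is the cue that no kube-system containers exist yet, i.e. a genuinely fresh node. The same query can be run on the node directly (label selector copied from the log; a sketch only):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // List all kube-system container IDs, running or exited.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            log.Fatal(err)
        }
        ids := strings.Fields(string(out))
        fmt.Printf("found %d kube-system containers\n", len(ids)) // 0 on a fresh node
    }
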
	I1018 10:20:50.284667  420727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:20:50.292531  420727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 10:20:50.300159  420727 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 10:20:50.300213  420727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 10:20:50.308037  420727 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 10:20:50.308051  420727 kubeadm.go:157] found existing configuration files:
	
	I1018 10:20:50.308105  420727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 10:20:50.315782  420727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 10:20:50.315842  420727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 10:20:50.323090  420727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 10:20:50.330643  420727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 10:20:50.330717  420727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 10:20:50.338076  420727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 10:20:50.346027  420727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 10:20:50.346080  420727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 10:20:50.353418  420727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 10:20:50.361299  420727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 10:20:50.361359  420727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
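
The four grep/rm pairs above reduce to one rule: keep a leftover kubeconfig only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm init regenerates it. Condensed into a Go sketch (illustrative, not the actual kubeadm.go logic):

    package main

    import (
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := filepath.Join("/etc/kubernetes", name)
            data, err := os.ReadFile(path)
            if err == nil && strings.Contains(string(data), endpoint) {
                continue // config already targets the right endpoint; keep it
            }
            os.Remove(path) // missing or stale: remove so kubeadm regenerates it
        }
    }
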
	I1018 10:20:50.368682  420727 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 10:20:50.409830  420727 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 10:20:50.409881  420727 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 10:20:50.438304  420727 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 10:20:50.438371  420727 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 10:20:50.438407  420727 kubeadm.go:318] OS: Linux
	I1018 10:20:50.438453  420727 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 10:20:50.438502  420727 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 10:20:50.438557  420727 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 10:20:50.438616  420727 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 10:20:50.438665  420727 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 10:20:50.438714  420727 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 10:20:50.438760  420727 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 10:20:50.438813  420727 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 10:20:50.438861  420727 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 10:20:50.511623  420727 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 10:20:50.511781  420727 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 10:20:50.511888  420727 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 10:20:50.525582  420727 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 10:20:50.532387  420727 out.go:252]   - Generating certificates and keys ...
	I1018 10:20:50.532491  420727 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 10:20:50.532574  420727 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 10:20:50.798161  420727 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 10:20:51.006387  420727 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 10:20:52.426075  420727 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 10:20:52.901933  420727 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 10:20:53.547356  420727 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 10:20:53.547519  420727 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-023595] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 10:20:54.102816  420727 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 10:20:54.102970  420727 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-023595] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 10:20:55.260646  420727 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 10:20:55.437737  420727 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 10:20:55.806753  420727 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 10:20:55.806984  420727 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 10:20:57.137832  420727 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 10:20:57.953073  420727 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 10:20:58.602574  420727 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 10:20:58.845379  420727 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 10:20:59.090849  420727 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 10:20:59.091733  420727 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 10:20:59.094610  420727 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 10:20:59.098347  420727 out.go:252]   - Booting up control plane ...
	I1018 10:20:59.098454  420727 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 10:20:59.098534  420727 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 10:20:59.098602  420727 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 10:20:59.114371  420727 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 10:20:59.114682  420727 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 10:20:59.125449  420727 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 10:20:59.125546  420727 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 10:20:59.125587  420727 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 10:20:59.257278  420727 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 10:20:59.257401  420727 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 10:21:00.277655  420727 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.019949741s
	I1018 10:21:00.283266  420727 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 10:21:00.283969  420727 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1018 10:21:00.285974  420727 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 10:21:00.286736  420727 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 10:21:03.785604  420727 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.498450196s
	I1018 10:21:05.867635  420727 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.580094603s
	I1018 10:21:07.786667  420727 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.502226065s
	I1018 10:21:07.806280  420727 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 10:21:07.821422  420727 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 10:21:07.838598  420727 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 10:21:07.838814  420727 kubeadm.go:318] [mark-control-plane] Marking the node scheduled-stop-023595 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 10:21:07.853800  420727 kubeadm.go:318] [bootstrap-token] Using token: ihpb03.fx4nqm710a6uaszn
	I1018 10:21:07.856743  420727 out.go:252]   - Configuring RBAC rules ...
	I1018 10:21:07.856874  420727 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 10:21:07.861311  420727 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 10:21:07.876433  420727 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 10:21:07.883165  420727 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 10:21:07.889457  420727 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 10:21:07.894864  420727 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 10:21:08.195357  420727 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 10:21:08.620695  420727 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 10:21:09.195438  420727 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 10:21:09.196386  420727 kubeadm.go:318] 
	I1018 10:21:09.196471  420727 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 10:21:09.196476  420727 kubeadm.go:318] 
	I1018 10:21:09.196555  420727 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 10:21:09.196559  420727 kubeadm.go:318] 
	I1018 10:21:09.196584  420727 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 10:21:09.196645  420727 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 10:21:09.196697  420727 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 10:21:09.196700  420727 kubeadm.go:318] 
	I1018 10:21:09.196756  420727 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 10:21:09.196759  420727 kubeadm.go:318] 
	I1018 10:21:09.196807  420727 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 10:21:09.196811  420727 kubeadm.go:318] 
	I1018 10:21:09.196885  420727 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 10:21:09.196963  420727 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 10:21:09.197033  420727 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 10:21:09.197037  420727 kubeadm.go:318] 
	I1018 10:21:09.197124  420727 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 10:21:09.197226  420727 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 10:21:09.197229  420727 kubeadm.go:318] 
	I1018 10:21:09.197316  420727 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ihpb03.fx4nqm710a6uaszn \
	I1018 10:21:09.197422  420727 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 \
	I1018 10:21:09.197442  420727 kubeadm.go:318] 	--control-plane 
	I1018 10:21:09.197445  420727 kubeadm.go:318] 
	I1018 10:21:09.197532  420727 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 10:21:09.197535  420727 kubeadm.go:318] 
	I1018 10:21:09.197619  420727 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ihpb03.fx4nqm710a6uaszn \
	I1018 10:21:09.197724  420727 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 
	I1018 10:21:09.202006  420727 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 10:21:09.202231  420727 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 10:21:09.202343  420727 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
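
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the CA's DER-encoded SubjectPublicKeyInfo, not of the certificate file, so it stays stable if the cert is re-issued with the same key. It can be recomputed in Go (CA path as staged earlier in this log):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm hashes the CA's DER-encoded SubjectPublicKeyInfo.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
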
	I1018 10:21:09.202359  420727 cni.go:84] Creating CNI manager for ""
	I1018 10:21:09.202365  420727 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:21:09.205539  420727 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 10:21:09.208414  420727 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 10:21:09.212427  420727 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 10:21:09.212437  420727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 10:21:09.225569  420727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 10:21:09.508137  420727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 10:21:09.508268  420727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:21:09.508343  420727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-023595 minikube.k8s.io/updated_at=2025_10_18T10_21_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=scheduled-stop-023595 minikube.k8s.io/primary=true
	I1018 10:21:09.659787  420727 ops.go:34] apiserver oom_adj: -16
	I1018 10:21:09.659813  420727 kubeadm.go:1113] duration metric: took 151.596113ms to wait for elevateKubeSystemPrivileges
	I1018 10:21:09.660119  420727 kubeadm.go:402] duration metric: took 19.405664573s to StartCluster
	I1018 10:21:09.660138  420727 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:21:09.660195  420727 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:21:09.660807  420727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:21:09.661002  420727 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:21:09.661103  420727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 10:21:09.661352  420727 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:21:09.661437  420727 config.go:182] Loaded profile config "scheduled-stop-023595": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:21:09.661450  420727 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-023595"
	I1018 10:21:09.661464  420727 addons.go:238] Setting addon storage-provisioner=true in "scheduled-stop-023595"
	I1018 10:21:09.661487  420727 host.go:66] Checking if "scheduled-stop-023595" exists ...
	I1018 10:21:09.661487  420727 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-023595"
	I1018 10:21:09.661498  420727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-023595"
	I1018 10:21:09.661831  420727 cli_runner.go:164] Run: docker container inspect scheduled-stop-023595 --format={{.State.Status}}
	I1018 10:21:09.661945  420727 cli_runner.go:164] Run: docker container inspect scheduled-stop-023595 --format={{.State.Status}}
	I1018 10:21:09.664759  420727 out.go:179] * Verifying Kubernetes components...
	I1018 10:21:09.668655  420727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:21:09.711420  420727 addons.go:238] Setting addon default-storageclass=true in "scheduled-stop-023595"
	I1018 10:21:09.711451  420727 host.go:66] Checking if "scheduled-stop-023595" exists ...
	I1018 10:21:09.711885  420727 cli_runner.go:164] Run: docker container inspect scheduled-stop-023595 --format={{.State.Status}}
	I1018 10:21:09.716599  420727 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:21:09.720080  420727 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:21:09.720091  420727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:21:09.720150  420727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-023595
	I1018 10:21:09.758895  420727 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:21:09.758908  420727 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:21:09.758966  420727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-023595
	I1018 10:21:09.765012  420727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/scheduled-stop-023595/id_rsa Username:docker}
	I1018 10:21:09.795096  420727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/scheduled-stop-023595/id_rsa Username:docker}
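
The two docker inspect calls above recover the host port Docker published for the container's SSH server (22/tcp); both SSH clients then dial 127.0.0.1:33333. A standalone version of the port lookup, with the container name from this run (sketch only):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Same template the log shows: pull the published HostPort for 22/tcp.
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            "scheduled-stop-023595").Output()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("ssh endpoint: 127.0.0.1:" + strings.TrimSpace(string(out)))
    }
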
	I1018 10:21:09.996973  420727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 10:21:10.004250  420727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:21:10.032299  420727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:21:10.154796  420727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:21:10.475819  420727 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
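
The sed pipeline at 10:21:09.996 edits the coredns ConfigMap in flight: it inserts a hosts block ahead of the forward directive so host.minikube.internal resolves to the gateway IP, and a log directive ahead of errors. From that sed expression, the relevant part of the resulting Corefile reads (default directives elided as "..."):

            log
            errors
            ...
            hosts {
               192.168.76.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf ...
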
	I1018 10:21:10.477355  420727 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:21:10.477486  420727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:21:10.607784  420727 api_server.go:72] duration metric: took 946.758454ms to wait for apiserver process to appear ...
	I1018 10:21:10.607796  420727 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:21:10.607821  420727 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 10:21:10.628384  420727 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 10:21:10.629598  420727 api_server.go:141] control plane version: v1.34.1
	I1018 10:21:10.629614  420727 api_server.go:131] duration metric: took 21.813194ms to wait for apiserver health ...
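
The health gate above is a plain HTTPS GET: the loop keeps polling until /healthz returns 200 with body "ok". A minimal probe in Go, using InsecureSkipVerify as a stand-in for loading the cluster CA (illustration only):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Illustration only: a real check should trust the cluster CA instead.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.76.2:8443/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }
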
	I1018 10:21:10.629622  420727 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:21:10.629777  420727 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 10:21:10.632909  420727 system_pods.go:59] 5 kube-system pods found
	I1018 10:21:10.632928  420727 system_pods.go:61] "etcd-scheduled-stop-023595" [b7bbc447-82fe-424e-a11e-817d030ada3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:21:10.632936  420727 system_pods.go:61] "kube-apiserver-scheduled-stop-023595" [ce3c8839-8f70-443e-ad92-50e69526ff88] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:21:10.632944  420727 system_pods.go:61] "kube-controller-manager-scheduled-stop-023595" [84c71dde-1b3e-4606-a808-307e5cd0a188] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:21:10.632950  420727 system_pods.go:61] "kube-scheduler-scheduled-stop-023595" [dd266ce7-5191-4202-af42-852f982d0c93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:21:10.632955  420727 system_pods.go:61] "storage-provisioner" [675caf0b-c627-4214-a13d-b44200357008] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 10:21:10.632960  420727 system_pods.go:74] duration metric: took 3.333025ms to wait for pod list to return data ...
	I1018 10:21:10.632970  420727 kubeadm.go:586] duration metric: took 971.948267ms to wait for: map[apiserver:true system_pods:true]
	I1018 10:21:10.632981  420727 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:21:10.633168  420727 addons.go:514] duration metric: took 971.812417ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 10:21:10.635614  420727 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:21:10.635633  420727 node_conditions.go:123] node cpu capacity is 2
	I1018 10:21:10.635644  420727 node_conditions.go:105] duration metric: took 2.659911ms to run NodePressure ...
	I1018 10:21:10.635665  420727 start.go:241] waiting for startup goroutines ...
	I1018 10:21:10.980406  420727 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-023595" context rescaled to 1 replicas
	I1018 10:21:10.980427  420727 start.go:246] waiting for cluster config update ...
	I1018 10:21:10.980437  420727 start.go:255] writing updated cluster config ...
	I1018 10:21:10.980719  420727 ssh_runner.go:195] Run: rm -f paused
	I1018 10:21:11.039830  420727 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:21:11.043028  420727 out.go:179] * Done! kubectl is now configured to use "scheduled-stop-023595" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.801340913Z" level=info msg="Creating container: kube-system/kube-apiserver-scheduled-stop-023595/kube-apiserver" id=4c6d1b14-c755-4d85-88c4-d21a4b341ee4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.802108664Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.802326573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.803918009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.807136605Z" level=info msg="Creating container: kube-system/kube-controller-manager-scheduled-stop-023595/kube-controller-manager" id=2d082bac-0183-432c-8689-97eb276d8b42 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.808142942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.808736493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.814810302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.819775294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.822981666Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.823633701Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.853805164Z" level=info msg="Created container 6dbf42b2b37e065f0fdafbc52dbd8b887ebfdc67df9f34c55b4235a3595a6364: kube-system/kube-scheduler-scheduled-stop-023595/kube-scheduler" id=fde18c0c-cfbc-47cf-a15d-2ea636fd6d07 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.855742581Z" level=info msg="Created container 505d874d018cdfbb32a528459ea7d1f4f16e7258e5db0dc2d73b4d49b56f0f6c: kube-system/kube-apiserver-scheduled-stop-023595/kube-apiserver" id=4c6d1b14-c755-4d85-88c4-d21a4b341ee4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.857915555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.858571259Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.86076866Z" level=info msg="Starting container: 6dbf42b2b37e065f0fdafbc52dbd8b887ebfdc67df9f34c55b4235a3595a6364" id=f2df190f-7856-4a2b-976e-3e2e96a085bd name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.868486992Z" level=info msg="Starting container: 505d874d018cdfbb32a528459ea7d1f4f16e7258e5db0dc2d73b4d49b56f0f6c" id=8ea6a82e-268b-433f-aafa-f36affd54a9a name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.871759684Z" level=info msg="Started container" PID=1237 containerID=6dbf42b2b37e065f0fdafbc52dbd8b887ebfdc67df9f34c55b4235a3595a6364 description=kube-system/kube-scheduler-scheduled-stop-023595/kube-scheduler id=f2df190f-7856-4a2b-976e-3e2e96a085bd name=/runtime.v1.RuntimeService/StartContainer sandboxID=cbea70f3875ae4edf3d2b96396ca43fdf5a4c1b26220ed256c36baeabd2288c4
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.875534605Z" level=info msg="Started container" PID=1242 containerID=505d874d018cdfbb32a528459ea7d1f4f16e7258e5db0dc2d73b4d49b56f0f6c description=kube-system/kube-apiserver-scheduled-stop-023595/kube-apiserver id=8ea6a82e-268b-433f-aafa-f36affd54a9a name=/runtime.v1.RuntimeService/StartContainer sandboxID=306db021e1f60fce68f7b19feeac9e7e999e995b757cea204c24040d9d0d6c3c
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.885613683Z" level=info msg="Created container 4b03b7a9ef97126368ad90589f9778de9e70c34ed4871f27a81b1525f2f20310: kube-system/kube-controller-manager-scheduled-stop-023595/kube-controller-manager" id=2d082bac-0183-432c-8689-97eb276d8b42 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.881500576Z" level=info msg="Created container 3592e77a8871581437edf63da9f1ab3152d7d705d4219e427420042b3a2137fe: kube-system/etcd-scheduled-stop-023595/etcd" id=d46b27cb-25a3-4061-8754-1e9a62c41bda name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.890084602Z" level=info msg="Starting container: 4b03b7a9ef97126368ad90589f9778de9e70c34ed4871f27a81b1525f2f20310" id=885e8d91-bdb3-4756-8079-233e7b24164d name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.892024522Z" level=info msg="Started container" PID=1252 containerID=4b03b7a9ef97126368ad90589f9778de9e70c34ed4871f27a81b1525f2f20310 description=kube-system/kube-controller-manager-scheduled-stop-023595/kube-controller-manager id=885e8d91-bdb3-4756-8079-233e7b24164d name=/runtime.v1.RuntimeService/StartContainer sandboxID=f72eb2d5d47da554f949fc19bda47e844faa9845158c56575308115eda8a6c6b
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.893559474Z" level=info msg="Starting container: 3592e77a8871581437edf63da9f1ab3152d7d705d4219e427420042b3a2137fe" id=172c1c25-d231-4746-8a23-cf3aa1289f29 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:21:00 scheduled-stop-023595 crio[838]: time="2025-10-18T10:21:00.896225537Z" level=info msg="Started container" PID=1236 containerID=3592e77a8871581437edf63da9f1ab3152d7d705d4219e427420042b3a2137fe description=kube-system/etcd-scheduled-stop-023595/etcd id=172c1c25-d231-4746-8a23-cf3aa1289f29 name=/runtime.v1.RuntimeService/StartContainer sandboxID=006ed7b2bbd4d2c5da0d51f372b80766517e6c74ade1ca5f411331e391332108
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                             NAMESPACE
	4b03b7a9ef971       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago      Running             kube-controller-manager   0                   f72eb2d5d47da       kube-controller-manager-scheduled-stop-023595   kube-system
	505d874d018cd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago      Running             kube-apiserver            0                   306db021e1f60       kube-apiserver-scheduled-stop-023595            kube-system
	6dbf42b2b37e0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago      Running             kube-scheduler            0                   cbea70f3875ae       kube-scheduler-scheduled-stop-023595            kube-system
	3592e77a88715       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago      Running             etcd                      0                   006ed7b2bbd4d       etcd-scheduled-stop-023595                      kube-system
	
	
	==> describe nodes <==
	Name:               scheduled-stop-023595
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-023595
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=scheduled-stop-023595
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_21_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:21:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-023595
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:21:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:21:08 +0000   Sat, 18 Oct 2025 10:21:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:21:08 +0000   Sat, 18 Oct 2025 10:21:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:21:08 +0000   Sat, 18 Oct 2025 10:21:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 10:21:08 +0000   Sat, 18 Oct 2025 10:21:01 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-023595
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                7da12a90-9f69-40ad-b558-068aafc5ec2b
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-023595                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5s
	  kube-system                 kube-apiserver-scheduled-stop-023595             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-scheduled-stop-023595    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-023595             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From     Message
	  ----     ------                   ----               ----     -------
	  Warning  CgroupV1                 12s                kubelet  cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12s (x8 over 12s)  kubelet  Node scheduled-stop-023595 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet  Node scheduled-stop-023595 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12s (x8 over 12s)  kubelet  Node scheduled-stop-023595 status is now: NodeHasSufficientPID
	  Normal   Starting                 4s                 kubelet  Starting kubelet.
	  Warning  CgroupV1                 4s                 kubelet  cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4s                 kubelet  Node scheduled-stop-023595 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s                 kubelet  Node scheduled-stop-023595 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s                 kubelet  Node scheduled-stop-023595 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[Oct18 09:56] overlayfs: idmapped layers are currently not supported
	[Oct18 09:57] overlayfs: idmapped layers are currently not supported
	[Oct18 09:58] overlayfs: idmapped layers are currently not supported
	[  +3.833371] overlayfs: idmapped layers are currently not supported
	[Oct18 10:00] overlayfs: idmapped layers are currently not supported
	[Oct18 10:01] overlayfs: idmapped layers are currently not supported
	[Oct18 10:02] overlayfs: idmapped layers are currently not supported
	[  +3.752225] overlayfs: idmapped layers are currently not supported
	[Oct18 10:03] overlayfs: idmapped layers are currently not supported
	[ +25.695966] overlayfs: idmapped layers are currently not supported
	[Oct18 10:05] overlayfs: idmapped layers are currently not supported
	[Oct18 10:10] overlayfs: idmapped layers are currently not supported
	[ +35.463301] overlayfs: idmapped layers are currently not supported
	[Oct18 10:11] overlayfs: idmapped layers are currently not supported
	[Oct18 10:13] overlayfs: idmapped layers are currently not supported
	[Oct18 10:14] overlayfs: idmapped layers are currently not supported
	[Oct18 10:15] overlayfs: idmapped layers are currently not supported
	[Oct18 10:16] overlayfs: idmapped layers are currently not supported
	[  +1.944912] overlayfs: idmapped layers are currently not supported
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3592e77a8871581437edf63da9f1ab3152d7d705d4219e427420042b3a2137fe] <==
	{"level":"warn","ts":"2025-10-18T10:21:04.518127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.536524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.574209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.577440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.593719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.614251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.628306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.644881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.663854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.680346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.697294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.715529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.731262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.754587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.790339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.794883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.812666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.840695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.881635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.899728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.914486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.953672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.966698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:04.982152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:21:05.038837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47230","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:21:12 up  2:03,  0 user,  load average: 2.33, 1.76, 1.87
	Linux scheduled-stop-023595 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [505d874d018cdfbb32a528459ea7d1f4f16e7258e5db0dc2d73b4d49b56f0f6c] <==
	I1018 10:21:05.818307       1 cache.go:39] Caches are synced for autoregister controller
	I1018 10:21:05.823175       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 10:21:05.863760       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 10:21:05.863849       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 10:21:05.864003       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 10:21:05.864185       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 10:21:05.864281       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 10:21:05.865584       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 10:21:05.866740       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 10:21:05.865637       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 10:21:05.865660       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 10:21:05.877048       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:21:06.587960       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 10:21:06.592932       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 10:21:06.593011       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 10:21:07.262852       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 10:21:07.310129       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 10:21:07.390799       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 10:21:07.398693       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 10:21:07.399796       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 10:21:07.406568       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 10:21:07.771147       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 10:21:08.603122       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 10:21:08.619875       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 10:21:08.632013       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [4b03b7a9ef97126368ad90589f9778de9e70c34ed4871f27a81b1525f2f20310] <==
	I1018 10:21:12.821717       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 10:21:12.825601       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 10:21:12.825877       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 10:21:12.827132       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 10:21:12.837476       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 10:21:12.837558       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:21:12.840362       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:21:12.841612       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 10:21:12.842831       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 10:21:12.842915       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 10:21:12.842952       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 10:21:12.842985       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 10:21:12.856531       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="scheduled-stop-023595" podCIDRs=["10.244.0.0/24"]
	I1018 10:21:12.867325       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 10:21:12.868423       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:21:12.868441       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 10:21:12.868477       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 10:21:12.868535       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 10:21:12.868618       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 10:21:12.868720       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="scheduled-stop-023595"
	I1018 10:21:12.868766       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 10:21:12.870685       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 10:21:12.870754       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 10:21:12.871627       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 10:21:12.876898       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-scheduler [6dbf42b2b37e065f0fdafbc52dbd8b887ebfdc67df9f34c55b4235a3595a6364] <==
	E1018 10:21:05.847562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 10:21:05.847631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 10:21:05.847718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 10:21:05.847826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 10:21:05.847900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 10:21:05.847965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 10:21:05.848083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 10:21:05.848156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 10:21:05.848242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1018 10:21:05.826126       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:21:05.848513       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1018 10:21:05.865969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 10:21:05.866104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 10:21:05.866215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 10:21:05.866225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 10:21:05.866344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 10:21:05.868677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 10:21:06.712802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 10:21:06.791584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 10:21:06.840642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 10:21:06.880501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 10:21:06.945544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 10:21:06.983195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 10:21:06.983196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 10:21:09.057787       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 10:21:08 scheduled-stop-023595 kubelet[1305]: I1018 10:21:08.976217    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b701d95fecc55dced5fb72f4cd18614e-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-023595\" (UID: \"b701d95fecc55dced5fb72f4cd18614e\") " pod="kube-system/kube-controller-manager-scheduled-stop-023595"
	Oct 18 10:21:08 scheduled-stop-023595 kubelet[1305]: I1018 10:21:08.976272    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df0f213f347e68dac21b2725346504bf-kubeconfig\") pod \"kube-scheduler-scheduled-stop-023595\" (UID: \"df0f213f347e68dac21b2725346504bf\") " pod="kube-system/kube-scheduler-scheduled-stop-023595"
	Oct 18 10:21:08 scheduled-stop-023595 kubelet[1305]: I1018 10:21:08.976297    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e73f23e72e6874bd14bf0696d075f15-k8s-certs\") pod \"kube-apiserver-scheduled-stop-023595\" (UID: \"1e73f23e72e6874bd14bf0696d075f15\") " pod="kube-system/kube-apiserver-scheduled-stop-023595"
	Oct 18 10:21:08 scheduled-stop-023595 kubelet[1305]: I1018 10:21:08.976315    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b701d95fecc55dced5fb72f4cd18614e-ca-certs\") pod \"kube-controller-manager-scheduled-stop-023595\" (UID: \"b701d95fecc55dced5fb72f4cd18614e\") " pod="kube-system/kube-controller-manager-scheduled-stop-023595"
	Oct 18 10:21:08 scheduled-stop-023595 kubelet[1305]: I1018 10:21:08.976338    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b701d95fecc55dced5fb72f4cd18614e-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-023595\" (UID: \"b701d95fecc55dced5fb72f4cd18614e\") " pod="kube-system/kube-controller-manager-scheduled-stop-023595"
	Oct 18 10:21:08 scheduled-stop-023595 kubelet[1305]: I1018 10:21:08.976357    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b701d95fecc55dced5fb72f4cd18614e-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-023595\" (UID: \"b701d95fecc55dced5fb72f4cd18614e\") " pod="kube-system/kube-controller-manager-scheduled-stop-023595"
	Oct 18 10:21:08 scheduled-stop-023595 kubelet[1305]: I1018 10:21:08.976381    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/289c80d31b0e67fc8c89160d15ca56d2-etcd-certs\") pod \"etcd-scheduled-stop-023595\" (UID: \"289c80d31b0e67fc8c89160d15ca56d2\") " pod="kube-system/etcd-scheduled-stop-023595"
	Oct 18 10:21:08 scheduled-stop-023595 kubelet[1305]: I1018 10:21:08.976398    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/289c80d31b0e67fc8c89160d15ca56d2-etcd-data\") pod \"etcd-scheduled-stop-023595\" (UID: \"289c80d31b0e67fc8c89160d15ca56d2\") " pod="kube-system/etcd-scheduled-stop-023595"
	Oct 18 10:21:08 scheduled-stop-023595 kubelet[1305]: I1018 10:21:08.976414    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e73f23e72e6874bd14bf0696d075f15-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-023595\" (UID: \"1e73f23e72e6874bd14bf0696d075f15\") " pod="kube-system/kube-apiserver-scheduled-stop-023595"
	Oct 18 10:21:08 scheduled-stop-023595 kubelet[1305]: I1018 10:21:08.976433    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e73f23e72e6874bd14bf0696d075f15-etc-ca-certificates\") pod \"kube-apiserver-scheduled-stop-023595\" (UID: \"1e73f23e72e6874bd14bf0696d075f15\") " pod="kube-system/kube-apiserver-scheduled-stop-023595"
	Oct 18 10:21:08 scheduled-stop-023595 kubelet[1305]: I1018 10:21:08.976451    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b701d95fecc55dced5fb72f4cd18614e-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-023595\" (UID: \"b701d95fecc55dced5fb72f4cd18614e\") " pod="kube-system/kube-controller-manager-scheduled-stop-023595"
	Oct 18 10:21:08 scheduled-stop-023595 kubelet[1305]: I1018 10:21:08.976466    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e73f23e72e6874bd14bf0696d075f15-ca-certs\") pod \"kube-apiserver-scheduled-stop-023595\" (UID: \"1e73f23e72e6874bd14bf0696d075f15\") " pod="kube-system/kube-apiserver-scheduled-stop-023595"
	Oct 18 10:21:08 scheduled-stop-023595 kubelet[1305]: I1018 10:21:08.976484    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e73f23e72e6874bd14bf0696d075f15-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-023595\" (UID: \"1e73f23e72e6874bd14bf0696d075f15\") " pod="kube-system/kube-apiserver-scheduled-stop-023595"
	Oct 18 10:21:08 scheduled-stop-023595 kubelet[1305]: I1018 10:21:08.976502    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b701d95fecc55dced5fb72f4cd18614e-etc-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-023595\" (UID: \"b701d95fecc55dced5fb72f4cd18614e\") " pod="kube-system/kube-controller-manager-scheduled-stop-023595"
	Oct 18 10:21:08 scheduled-stop-023595 kubelet[1305]: I1018 10:21:08.976519    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b701d95fecc55dced5fb72f4cd18614e-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-023595\" (UID: \"b701d95fecc55dced5fb72f4cd18614e\") " pod="kube-system/kube-controller-manager-scheduled-stop-023595"
	Oct 18 10:21:09 scheduled-stop-023595 kubelet[1305]: I1018 10:21:09.524219    1305 apiserver.go:52] "Watching apiserver"
	Oct 18 10:21:09 scheduled-stop-023595 kubelet[1305]: I1018 10:21:09.575142    1305 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 10:21:09 scheduled-stop-023595 kubelet[1305]: I1018 10:21:09.651475    1305 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-scheduled-stop-023595"
	Oct 18 10:21:09 scheduled-stop-023595 kubelet[1305]: E1018 10:21:09.764139    1305 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-scheduled-stop-023595\" already exists" pod="kube-system/etcd-scheduled-stop-023595"
	Oct 18 10:21:09 scheduled-stop-023595 kubelet[1305]: I1018 10:21:09.877303    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-023595" podStartSLOduration=1.877283914 podStartE2EDuration="1.877283914s" podCreationTimestamp="2025-10-18 10:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:21:09.764390291 +0000 UTC m=+1.315625870" watchObservedRunningTime="2025-10-18 10:21:09.877283914 +0000 UTC m=+1.428519501"
	Oct 18 10:21:09 scheduled-stop-023595 kubelet[1305]: I1018 10:21:09.916973    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-023595" podStartSLOduration=1.9169517489999999 podStartE2EDuration="1.916951749s" podCreationTimestamp="2025-10-18 10:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:21:09.879811281 +0000 UTC m=+1.431046868" watchObservedRunningTime="2025-10-18 10:21:09.916951749 +0000 UTC m=+1.468187344"
	Oct 18 10:21:09 scheduled-stop-023595 kubelet[1305]: I1018 10:21:09.957772    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-023595" podStartSLOduration=1.957657781 podStartE2EDuration="1.957657781s" podCreationTimestamp="2025-10-18 10:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:21:09.917396108 +0000 UTC m=+1.468631695" watchObservedRunningTime="2025-10-18 10:21:09.957657781 +0000 UTC m=+1.508893450"
	Oct 18 10:21:09 scheduled-stop-023595 kubelet[1305]: I1018 10:21:09.957954    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-023595" podStartSLOduration=2.957946245 podStartE2EDuration="2.957946245s" podCreationTimestamp="2025-10-18 10:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:21:09.957867124 +0000 UTC m=+1.509102711" watchObservedRunningTime="2025-10-18 10:21:09.957946245 +0000 UTC m=+1.509181848"
	Oct 18 10:21:12 scheduled-stop-023595 kubelet[1305]: I1018 10:21:12.861568    1305 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 10:21:12 scheduled-stop-023595 kubelet[1305]: I1018 10:21:12.862186    1305 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-023595 -n scheduled-stop-023595
helpers_test.go:269: (dbg) Run:  kubectl --context scheduled-stop-023595 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: storage-provisioner
helpers_test.go:282: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context scheduled-stop-023595 describe pod storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context scheduled-stop-023595 describe pod storage-provisioner: exit status 1 (103.89231ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context scheduled-stop-023595 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "scheduled-stop-023595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-023595
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-023595: (2.408327621s)
--- FAIL: TestScheduledStopUnix (40.32s)
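Note: the non-running-pod check at helpers_test.go:269 above reduces to a single kubectl field-selector query; a minimal by-hand reproduction, assuming the scheduled-stop-023595 context still exists, is:

	kubectl --context scheduled-stop-023595 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'

The later NotFound from `kubectl describe pod storage-provisioner` most likely means the pod was deleted between the list and the describe.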

x
+
TestPause/serial/Pause (9.49s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-019243 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-019243 --alsologtostderr -v=5: exit status 80 (2.647566433s)

-- stdout --
	* Pausing node pause-019243 ... 
	
	

-- /stdout --
** stderr ** 
	I1018 10:23:31.994005  437164 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:23:31.994128  437164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:23:31.994133  437164 out.go:374] Setting ErrFile to fd 2...
	I1018 10:23:31.994139  437164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:23:31.994388  437164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:23:31.994620  437164 out.go:368] Setting JSON to false
	I1018 10:23:31.994640  437164 mustload.go:65] Loading cluster: pause-019243
	I1018 10:23:31.995053  437164 config.go:182] Loaded profile config "pause-019243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:23:31.995505  437164 cli_runner.go:164] Run: docker container inspect pause-019243 --format={{.State.Status}}
	I1018 10:23:32.017568  437164 host.go:66] Checking if "pause-019243" exists ...
	I1018 10:23:32.017923  437164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:23:32.106829  437164 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 10:23:32.096865353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:23:32.107498  437164 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-019243 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 10:23:32.113107  437164 out.go:179] * Pausing node pause-019243 ... 
	I1018 10:23:32.115993  437164 host.go:66] Checking if "pause-019243" exists ...
	I1018 10:23:32.116491  437164 ssh_runner.go:195] Run: systemctl --version
	I1018 10:23:32.116541  437164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:32.141682  437164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/pause-019243/id_rsa Username:docker}
	I1018 10:23:32.244489  437164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:23:32.258125  437164 pause.go:52] kubelet running: true
	I1018 10:23:32.258257  437164 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:23:32.516754  437164 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:23:32.516918  437164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:23:32.611750  437164 cri.go:89] found id: "d7c9eaf75f0a52bfc264229bcdb9b422cf7b433d030219d78d959d7090553e93"
	I1018 10:23:32.611823  437164 cri.go:89] found id: "e4701015deca0aaf19fd21f101a1b2f4db45f40d88473ec3a8eb76be901e6b18"
	I1018 10:23:32.611853  437164 cri.go:89] found id: "06d43ac35777a88f3e530cc1738680e465f2c9e6e963d7941bfd460e7bbddbd1"
	I1018 10:23:32.611872  437164 cri.go:89] found id: "e9fc519ce787134d9fb283ac3940bcdbcda1de76ea88d17fe5e11bd56e515333"
	I1018 10:23:32.611890  437164 cri.go:89] found id: "15128da41ead8115fa2f84a7672dd4abe119002449d59f70960273bd6e459027"
	I1018 10:23:32.611908  437164 cri.go:89] found id: "09054465f840b5bd38d7d4516d56642a1c6df2c6eb394ff6de2428c47a2a957d"
	I1018 10:23:32.611936  437164 cri.go:89] found id: "d28ad7321d63b96d8f407c66665665893126b46543ebff1b3dbf9af6d6c2dfa7"
	I1018 10:23:32.611954  437164 cri.go:89] found id: "0ae979551b18ee12476387ea61edbf996097504d4837b5af82bca211b75cbe5c"
	I1018 10:23:32.611973  437164 cri.go:89] found id: "006738fd96b2a20ec03049da106472c554433b20145062583aebec83cb373d89"
	I1018 10:23:32.612000  437164 cri.go:89] found id: "3cc277a3092b1996e080de34cfec6f38d30c32c1fb580882942cb48454483741"
	I1018 10:23:32.612024  437164 cri.go:89] found id: "8600f9e89059224c9e5954596534b99d00dc73824984fc82abde77714c802a01"
	I1018 10:23:32.612040  437164 cri.go:89] found id: "888e1c745ae86edf3cfb0b8124645f0fd6da8c2376869b3e66ea6f0930abf181"
	I1018 10:23:32.612060  437164 cri.go:89] found id: "b0c8a1278a6d644d49e8aa83478280670ab2c3020dc228a9b4dfe7c86b1f20f5"
	I1018 10:23:32.612095  437164 cri.go:89] found id: "d9c28587d48616a8fdebac1348a88dc7e223b29162f351fcf753a56d430aa742"
	I1018 10:23:32.612113  437164 cri.go:89] found id: ""
	I1018 10:23:32.612188  437164 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:23:32.626977  437164 retry.go:31] will retry after 147.627356ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:23:32Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:23:32.775432  437164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:23:32.788419  437164 pause.go:52] kubelet running: false
	I1018 10:23:32.788503  437164 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:23:32.923657  437164 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:23:32.923735  437164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:23:32.995578  437164 cri.go:89] found id: "d7c9eaf75f0a52bfc264229bcdb9b422cf7b433d030219d78d959d7090553e93"
	I1018 10:23:32.995605  437164 cri.go:89] found id: "e4701015deca0aaf19fd21f101a1b2f4db45f40d88473ec3a8eb76be901e6b18"
	I1018 10:23:32.995610  437164 cri.go:89] found id: "06d43ac35777a88f3e530cc1738680e465f2c9e6e963d7941bfd460e7bbddbd1"
	I1018 10:23:32.995614  437164 cri.go:89] found id: "e9fc519ce787134d9fb283ac3940bcdbcda1de76ea88d17fe5e11bd56e515333"
	I1018 10:23:32.995617  437164 cri.go:89] found id: "15128da41ead8115fa2f84a7672dd4abe119002449d59f70960273bd6e459027"
	I1018 10:23:32.995638  437164 cri.go:89] found id: "09054465f840b5bd38d7d4516d56642a1c6df2c6eb394ff6de2428c47a2a957d"
	I1018 10:23:32.995642  437164 cri.go:89] found id: "d28ad7321d63b96d8f407c66665665893126b46543ebff1b3dbf9af6d6c2dfa7"
	I1018 10:23:32.995645  437164 cri.go:89] found id: "0ae979551b18ee12476387ea61edbf996097504d4837b5af82bca211b75cbe5c"
	I1018 10:23:32.995653  437164 cri.go:89] found id: "006738fd96b2a20ec03049da106472c554433b20145062583aebec83cb373d89"
	I1018 10:23:32.995659  437164 cri.go:89] found id: "3cc277a3092b1996e080de34cfec6f38d30c32c1fb580882942cb48454483741"
	I1018 10:23:32.995669  437164 cri.go:89] found id: "8600f9e89059224c9e5954596534b99d00dc73824984fc82abde77714c802a01"
	I1018 10:23:32.995672  437164 cri.go:89] found id: "888e1c745ae86edf3cfb0b8124645f0fd6da8c2376869b3e66ea6f0930abf181"
	I1018 10:23:32.995675  437164 cri.go:89] found id: "b0c8a1278a6d644d49e8aa83478280670ab2c3020dc228a9b4dfe7c86b1f20f5"
	I1018 10:23:32.995678  437164 cri.go:89] found id: "d9c28587d48616a8fdebac1348a88dc7e223b29162f351fcf753a56d430aa742"
	I1018 10:23:32.995682  437164 cri.go:89] found id: ""
	I1018 10:23:32.995735  437164 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:23:33.006355  437164 retry.go:31] will retry after 283.692638ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:23:33Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:23:33.290939  437164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:23:33.309991  437164 pause.go:52] kubelet running: false
	I1018 10:23:33.310113  437164 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:23:33.553517  437164 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:23:33.553601  437164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:23:33.657090  437164 cri.go:89] found id: "d7c9eaf75f0a52bfc264229bcdb9b422cf7b433d030219d78d959d7090553e93"
	I1018 10:23:33.657117  437164 cri.go:89] found id: "e4701015deca0aaf19fd21f101a1b2f4db45f40d88473ec3a8eb76be901e6b18"
	I1018 10:23:33.657122  437164 cri.go:89] found id: "06d43ac35777a88f3e530cc1738680e465f2c9e6e963d7941bfd460e7bbddbd1"
	I1018 10:23:33.657126  437164 cri.go:89] found id: "e9fc519ce787134d9fb283ac3940bcdbcda1de76ea88d17fe5e11bd56e515333"
	I1018 10:23:33.657130  437164 cri.go:89] found id: "15128da41ead8115fa2f84a7672dd4abe119002449d59f70960273bd6e459027"
	I1018 10:23:33.657134  437164 cri.go:89] found id: "09054465f840b5bd38d7d4516d56642a1c6df2c6eb394ff6de2428c47a2a957d"
	I1018 10:23:33.657137  437164 cri.go:89] found id: "d28ad7321d63b96d8f407c66665665893126b46543ebff1b3dbf9af6d6c2dfa7"
	I1018 10:23:33.657141  437164 cri.go:89] found id: "0ae979551b18ee12476387ea61edbf996097504d4837b5af82bca211b75cbe5c"
	I1018 10:23:33.657144  437164 cri.go:89] found id: "006738fd96b2a20ec03049da106472c554433b20145062583aebec83cb373d89"
	I1018 10:23:33.657150  437164 cri.go:89] found id: "3cc277a3092b1996e080de34cfec6f38d30c32c1fb580882942cb48454483741"
	I1018 10:23:33.657153  437164 cri.go:89] found id: "8600f9e89059224c9e5954596534b99d00dc73824984fc82abde77714c802a01"
	I1018 10:23:33.657157  437164 cri.go:89] found id: "888e1c745ae86edf3cfb0b8124645f0fd6da8c2376869b3e66ea6f0930abf181"
	I1018 10:23:33.657160  437164 cri.go:89] found id: "b0c8a1278a6d644d49e8aa83478280670ab2c3020dc228a9b4dfe7c86b1f20f5"
	I1018 10:23:33.657174  437164 cri.go:89] found id: "d9c28587d48616a8fdebac1348a88dc7e223b29162f351fcf753a56d430aa742"
	I1018 10:23:33.657217  437164 cri.go:89] found id: ""
	I1018 10:23:33.657270  437164 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:23:33.667957  437164 retry.go:31] will retry after 433.740809ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:23:33Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:23:34.102353  437164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:23:34.120652  437164 pause.go:52] kubelet running: false
	I1018 10:23:34.120717  437164 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:23:34.385767  437164 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:23:34.385849  437164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:23:34.523575  437164 cri.go:89] found id: "d7c9eaf75f0a52bfc264229bcdb9b422cf7b433d030219d78d959d7090553e93"
	I1018 10:23:34.523602  437164 cri.go:89] found id: "e4701015deca0aaf19fd21f101a1b2f4db45f40d88473ec3a8eb76be901e6b18"
	I1018 10:23:34.523608  437164 cri.go:89] found id: "06d43ac35777a88f3e530cc1738680e465f2c9e6e963d7941bfd460e7bbddbd1"
	I1018 10:23:34.523612  437164 cri.go:89] found id: "e9fc519ce787134d9fb283ac3940bcdbcda1de76ea88d17fe5e11bd56e515333"
	I1018 10:23:34.523615  437164 cri.go:89] found id: "15128da41ead8115fa2f84a7672dd4abe119002449d59f70960273bd6e459027"
	I1018 10:23:34.523619  437164 cri.go:89] found id: "09054465f840b5bd38d7d4516d56642a1c6df2c6eb394ff6de2428c47a2a957d"
	I1018 10:23:34.523622  437164 cri.go:89] found id: "d28ad7321d63b96d8f407c66665665893126b46543ebff1b3dbf9af6d6c2dfa7"
	I1018 10:23:34.523625  437164 cri.go:89] found id: "0ae979551b18ee12476387ea61edbf996097504d4837b5af82bca211b75cbe5c"
	I1018 10:23:34.523628  437164 cri.go:89] found id: "006738fd96b2a20ec03049da106472c554433b20145062583aebec83cb373d89"
	I1018 10:23:34.523634  437164 cri.go:89] found id: "3cc277a3092b1996e080de34cfec6f38d30c32c1fb580882942cb48454483741"
	I1018 10:23:34.523638  437164 cri.go:89] found id: "8600f9e89059224c9e5954596534b99d00dc73824984fc82abde77714c802a01"
	I1018 10:23:34.523641  437164 cri.go:89] found id: "888e1c745ae86edf3cfb0b8124645f0fd6da8c2376869b3e66ea6f0930abf181"
	I1018 10:23:34.523644  437164 cri.go:89] found id: "b0c8a1278a6d644d49e8aa83478280670ab2c3020dc228a9b4dfe7c86b1f20f5"
	I1018 10:23:34.523648  437164 cri.go:89] found id: "d9c28587d48616a8fdebac1348a88dc7e223b29162f351fcf753a56d430aa742"
	I1018 10:23:34.523651  437164 cri.go:89] found id: ""
	I1018 10:23:34.523698  437164 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:23:34.543629  437164 out.go:203] 
	W1018 10:23:34.546685  437164 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:23:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:23:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 10:23:34.546711  437164 out.go:285] * 
	* 
	W1018 10:23:34.555325  437164 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 10:23:34.558195  437164 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-019243 --alsologtostderr -v=5" : exit status 80
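Note: the exit status 80 above bottoms out in `sudo runc list -f json` failing with "open /run/runc: no such file or directory" on the node, so every pause retry hits the same error. A by-hand reproduction of the two checks minikube runs over SSH, assuming the pause-019243 profile is still up, would be:

	minikube ssh -p pause-019243 -- sudo runc list -f json
	minikube ssh -p pause-019243 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

The crictl listing returns container IDs (as seen in the stderr log above) while the runc state directory /run/runc is absent, which is what turns the pause into GUEST_PAUSE.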
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-019243
helpers_test.go:243: (dbg) docker inspect pause-019243:

-- stdout --
	[
	    {
	        "Id": "3902a816561ae2ae838815e122f06ce84d468c513ca43dccc358a1e3a7125fb4",
	        "Created": "2025-10-18T10:21:38.142725934Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 426132,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:21:38.229548709Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/3902a816561ae2ae838815e122f06ce84d468c513ca43dccc358a1e3a7125fb4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3902a816561ae2ae838815e122f06ce84d468c513ca43dccc358a1e3a7125fb4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3902a816561ae2ae838815e122f06ce84d468c513ca43dccc358a1e3a7125fb4/hosts",
	        "LogPath": "/var/lib/docker/containers/3902a816561ae2ae838815e122f06ce84d468c513ca43dccc358a1e3a7125fb4/3902a816561ae2ae838815e122f06ce84d468c513ca43dccc358a1e3a7125fb4-json.log",
	        "Name": "/pause-019243",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-019243:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-019243",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3902a816561ae2ae838815e122f06ce84d468c513ca43dccc358a1e3a7125fb4",
	                "LowerDir": "/var/lib/docker/overlay2/e91ce86bb9ac9a31e1e05e6b951a98cd31e0b38c4c09f267221d71fa8428eaf4-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e91ce86bb9ac9a31e1e05e6b951a98cd31e0b38c4c09f267221d71fa8428eaf4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e91ce86bb9ac9a31e1e05e6b951a98cd31e0b38c4c09f267221d71fa8428eaf4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e91ce86bb9ac9a31e1e05e6b951a98cd31e0b38c4c09f267221d71fa8428eaf4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-019243",
	                "Source": "/var/lib/docker/volumes/pause-019243/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-019243",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-019243",
	                "name.minikube.sigs.k8s.io": "pause-019243",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f2bb97a78fa915e21f95a4230b92bc400a83022bf6e9873eba72f37d35500625",
	            "SandboxKey": "/var/run/docker/netns/f2bb97a78fa9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33348"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33349"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33352"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33350"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33351"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-019243": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:a0:f0:d1:cc:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4a9c844bd2f37f27a691b49d505aa949bddd3153af738f97af2bb8079b116b6a",
	                    "EndpointID": "692428ed49b6524aed89a8a22c1ab5299754ddea7022ba74e1d94dfee416e1ff",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-019243",
	                        "3902a816561a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
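The "Ports" map in the inspect output above is the SSH port binding the tooling below depends on; a minimal sketch of the same lookup, reusing the Go template that minikube itself runs later in these logs (profile name pause-019243 taken from this run):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-019243
	# expected to print 33348, the HostPort bound to 22/tcp in the inspect output above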
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-019243 -n pause-019243
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-019243 -n pause-019243: exit status 2 (560.995564ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
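An exit status of 2 alongside a "Running" host is consistent with the pause attempt leaving kubelet/apiserver stopped or paused while the container itself stayed up; a hedged sketch for checking the remaining components when reproducing (same profile name; minikube status also accepts --output=json alongside the --format template used above):

	out/minikube-linux-arm64 status -p pause-019243 --output=json
	out/minikube-linux-arm64 status -p pause-019243 --format='{{.Host}}/{{.Kubelet}}/{{.APIServer}}'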
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-019243 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-019243 logs -n 25: (2.29293653s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p test-preload-968310                                                                                                │ test-preload-968310         │ jenkins │ v1.37.0 │ 18 Oct 25 10:20 UTC │ 18 Oct 25 10:20 UTC │
	│ start   │ -p scheduled-stop-023595 --memory=3072 --driver=docker  --container-runtime=crio                                      │ scheduled-stop-023595       │ jenkins │ v1.37.0 │ 18 Oct 25 10:20 UTC │ 18 Oct 25 10:21 UTC │
	│ stop    │ -p scheduled-stop-023595 --schedule 5m                                                                                │ scheduled-stop-023595       │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ stop    │ -p scheduled-stop-023595 --schedule 5m                                                                                │ scheduled-stop-023595       │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ stop    │ -p scheduled-stop-023595 --schedule 5m                                                                                │ scheduled-stop-023595       │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ stop    │ -p scheduled-stop-023595 --schedule 15s                                                                               │ scheduled-stop-023595       │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ stop    │ -p scheduled-stop-023595 --schedule 15s                                                                               │ scheduled-stop-023595       │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ stop    │ -p scheduled-stop-023595 --schedule 15s                                                                               │ scheduled-stop-023595       │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ delete  │ -p scheduled-stop-023595                                                                                              │ scheduled-stop-023595       │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │ 18 Oct 25 10:21 UTC │
	│ start   │ -p insufficient-storage-971499 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio      │ insufficient-storage-971499 │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ delete  │ -p insufficient-storage-971499                                                                                        │ insufficient-storage-971499 │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │ 18 Oct 25 10:21 UTC │
	│ start   │ -p pause-019243 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio             │ pause-019243                │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │ 18 Oct 25 10:22 UTC │
	│ start   │ -p NoKubernetes-403599 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio         │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ start   │ -p NoKubernetes-403599 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                 │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │ 18 Oct 25 10:22 UTC │
	│ start   │ -p NoKubernetes-403599 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:22 UTC │ 18 Oct 25 10:22 UTC │
	│ delete  │ -p NoKubernetes-403599                                                                                                │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:22 UTC │ 18 Oct 25 10:22 UTC │
	│ start   │ -p NoKubernetes-403599 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:22 UTC │ 18 Oct 25 10:22 UTC │
	│ ssh     │ -p NoKubernetes-403599 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:22 UTC │                     │
	│ stop    │ -p NoKubernetes-403599                                                                                                │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:22 UTC │ 18 Oct 25 10:22 UTC │
	│ start   │ -p NoKubernetes-403599 --driver=docker  --container-runtime=crio                                                      │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:22 UTC │ 18 Oct 25 10:22 UTC │
	│ ssh     │ -p NoKubernetes-403599 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:22 UTC │                     │
	│ delete  │ -p NoKubernetes-403599                                                                                                │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:22 UTC │ 18 Oct 25 10:22 UTC │
	│ start   │ -p missing-upgrade-495276 --memory=3072 --driver=docker  --container-runtime=crio                                     │ missing-upgrade-495276      │ jenkins │ v1.32.0 │ 18 Oct 25 10:22 UTC │                     │
	│ start   │ -p pause-019243 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-019243                │ jenkins │ v1.37.0 │ 18 Oct 25 10:23 UTC │ 18 Oct 25 10:23 UTC │
	│ pause   │ -p pause-019243 --alsologtostderr -v=5                                                                                │ pause-019243                │ jenkins │ v1.37.0 │ 18 Oct 25 10:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:23:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 10:23:00.055519  434303 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:23:00.057671  434303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:23:00.057686  434303 out.go:374] Setting ErrFile to fd 2...
	I1018 10:23:00.057731  434303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:23:00.058567  434303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:23:00.060214  434303 out.go:368] Setting JSON to false
	I1018 10:23:00.074037  434303 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7530,"bootTime":1760775450,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:23:00.074465  434303 start.go:141] virtualization:  
	I1018 10:23:00.090251  434303 out.go:179] * [pause-019243] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:23:00.104082  434303 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:23:00.105045  434303 notify.go:220] Checking for updates...
	I1018 10:23:00.134082  434303 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:23:00.168477  434303 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:23:00.173280  434303 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:23:00.186648  434303 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:23:00.190057  434303 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:23:00.205775  434303 config.go:182] Loaded profile config "pause-019243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:23:00.209482  434303 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:23:00.309366  434303 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:23:00.309533  434303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:23:00.413010  434303 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2025-10-18 10:23:00.395744205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:23:00.413135  434303 docker.go:318] overlay module found
	I1018 10:23:00.416962  434303 out.go:179] * Using the docker driver based on existing profile
	I1018 10:23:00.420015  434303 start.go:305] selected driver: docker
	I1018 10:23:00.420043  434303 start.go:925] validating driver "docker" against &{Name:pause-019243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-019243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:23:00.420202  434303 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:23:00.420316  434303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:23:00.540220  434303 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2025-10-18 10:23:00.519746493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:23:00.540720  434303 cni.go:84] Creating CNI manager for ""
	I1018 10:23:00.540780  434303 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:23:00.540823  434303 start.go:349] cluster config:
	{Name:pause-019243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-019243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:23:00.544247  434303 out.go:179] * Starting "pause-019243" primary control-plane node in "pause-019243" cluster
	I1018 10:23:00.547306  434303 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:23:00.550945  434303 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:23:00.553833  434303 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:23:00.553916  434303 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 10:23:00.553934  434303 cache.go:58] Caching tarball of preloaded images
	I1018 10:23:00.554027  434303 preload.go:233] Found /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 10:23:00.554041  434303 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 10:23:00.554195  434303 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/config.json ...
	I1018 10:23:00.554468  434303 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:23:00.586353  434303 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:23:00.586379  434303 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 10:23:00.586394  434303 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:23:00.586418  434303 start.go:360] acquireMachinesLock for pause-019243: {Name:mk05462e9af1aedb94ca598a536cc4d42d3c7af9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:23:00.586478  434303 start.go:364] duration metric: took 38.867µs to acquireMachinesLock for "pause-019243"
	I1018 10:23:00.586502  434303 start.go:96] Skipping create...Using existing machine configuration
	I1018 10:23:00.586511  434303 fix.go:54] fixHost starting: 
	I1018 10:23:00.586762  434303 cli_runner.go:164] Run: docker container inspect pause-019243 --format={{.State.Status}}
	I1018 10:23:00.621299  434303 fix.go:112] recreateIfNeeded on pause-019243: state=Running err=<nil>
	W1018 10:23:00.621346  434303 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 10:23:00.624780  434303 out.go:252] * Updating the running docker "pause-019243" container ...
	I1018 10:23:00.624822  434303 machine.go:93] provisionDockerMachine start ...
	I1018 10:23:00.624916  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:00.644351  434303 main.go:141] libmachine: Using SSH client type: native
	I1018 10:23:00.644681  434303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33348 <nil> <nil>}
	I1018 10:23:00.644696  434303 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:23:00.821702  434303 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-019243
	
	I1018 10:23:00.821794  434303 ubuntu.go:182] provisioning hostname "pause-019243"
	I1018 10:23:00.821883  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:00.843231  434303 main.go:141] libmachine: Using SSH client type: native
	I1018 10:23:00.843552  434303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33348 <nil> <nil>}
	I1018 10:23:00.843562  434303 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-019243 && echo "pause-019243" | sudo tee /etc/hostname
	I1018 10:23:01.019782  434303 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-019243
	
	I1018 10:23:01.019929  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:01.044597  434303 main.go:141] libmachine: Using SSH client type: native
	I1018 10:23:01.045095  434303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33348 <nil> <nil>}
	I1018 10:23:01.045617  434303 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-019243' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-019243/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-019243' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:23:01.214779  434303 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 10:23:01.214847  434303 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:23:01.214892  434303 ubuntu.go:190] setting up certificates
	I1018 10:23:01.214917  434303 provision.go:84] configureAuth start
	I1018 10:23:01.215002  434303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-019243
	I1018 10:23:01.246910  434303 provision.go:143] copyHostCerts
	I1018 10:23:01.246973  434303 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:23:01.246983  434303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:23:01.247054  434303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:23:01.247344  434303 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:23:01.247362  434303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:23:01.247612  434303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:23:01.247947  434303 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:23:01.247960  434303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:23:01.248176  434303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:23:01.248276  434303 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.pause-019243 san=[127.0.0.1 192.168.76.2 localhost minikube pause-019243]
	I1018 10:23:01.553206  434303 provision.go:177] copyRemoteCerts
	I1018 10:23:01.553568  434303 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:23:01.553642  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:01.576564  434303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/pause-019243/id_rsa Username:docker}
	I1018 10:23:01.691151  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:23:01.723228  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 10:23:01.741039  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:23:01.758699  434303 provision.go:87] duration metric: took 543.744817ms to configureAuth
	I1018 10:23:01.758722  434303 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:23:01.758942  434303 config.go:182] Loaded profile config "pause-019243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:23:01.759049  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:01.779735  434303 main.go:141] libmachine: Using SSH client type: native
	I1018 10:23:01.780038  434303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33348 <nil> <nil>}
	I1018 10:23:01.780059  434303 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:23:07.087909  434147 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from cached tarball
	I1018 10:23:07.087936  434147 cache.go:194] Successfully downloaded all kic artifacts
	I1018 10:23:07.087991  434147 start.go:365] acquiring machines lock for missing-upgrade-495276: {Name:mk50b1d211b8a40bcfaec996f67525cfafb066cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:23:07.088100  434147 start.go:369] acquired machines lock for "missing-upgrade-495276" in 90.929µs
	I1018 10:23:07.088123  434147 start.go:93] Provisioning new machine with config: &{Name:missing-upgrade-495276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-495276 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:23:07.088191  434147 start.go:125] createHost starting for "" (driver="docker")
	I1018 10:23:07.091888  434147 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 10:23:07.092148  434147 start.go:159] libmachine.API.Create for "missing-upgrade-495276" (driver="docker")
	I1018 10:23:07.092167  434147 client.go:168] LocalClient.Create starting
	I1018 10:23:07.092227  434147 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem
	I1018 10:23:07.092259  434147 main.go:141] libmachine: Decoding PEM data...
	I1018 10:23:07.092272  434147 main.go:141] libmachine: Parsing certificate...
	I1018 10:23:07.092326  434147 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem
	I1018 10:23:07.092345  434147 main.go:141] libmachine: Decoding PEM data...
	I1018 10:23:07.092355  434147 main.go:141] libmachine: Parsing certificate...
	I1018 10:23:07.092701  434147 cli_runner.go:164] Run: docker network inspect missing-upgrade-495276 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 10:23:07.114920  434147 cli_runner.go:211] docker network inspect missing-upgrade-495276 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 10:23:07.114986  434147 network_create.go:281] running [docker network inspect missing-upgrade-495276] to gather additional debugging logs...
	I1018 10:23:07.115007  434147 cli_runner.go:164] Run: docker network inspect missing-upgrade-495276
	W1018 10:23:07.133293  434147 cli_runner.go:211] docker network inspect missing-upgrade-495276 returned with exit code 1
	I1018 10:23:07.133314  434147 network_create.go:284] error running [docker network inspect missing-upgrade-495276]: docker network inspect missing-upgrade-495276: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-495276 not found
	I1018 10:23:07.133325  434147 network_create.go:286] output of [docker network inspect missing-upgrade-495276]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-495276 not found
	
	** /stderr **
	I1018 10:23:07.133419  434147 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:23:07.152298  434147 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-57e2bd20fa2f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:61:d0:06:18:0c} reservation:<nil>}
	I1018 10:23:07.152793  434147 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb4a8c61b69d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:8c:0f:03:ab:d8} reservation:<nil>}
	I1018 10:23:07.153292  434147 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1d3a8356dfdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:ce:7a:d0:e4:d4} reservation:<nil>}
	I1018 10:23:07.153645  434147 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4a9c844bd2f3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0e:13:da:08:e1:42} reservation:<nil>}
	I1018 10:23:07.154123  434147 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025a5a20}
	I1018 10:23:07.154156  434147 network_create.go:124] attempt to create docker network missing-upgrade-495276 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 10:23:07.154210  434147 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-495276 missing-upgrade-495276
	I1018 10:23:07.218493  434147 network_create.go:108] docker network missing-upgrade-495276 192.168.85.0/24 created
	I1018 10:23:07.218522  434147 kic.go:121] calculated static IP "192.168.85.2" for the "missing-upgrade-495276" container
	I1018 10:23:07.218600  434147 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 10:23:07.238929  434147 cli_runner.go:164] Run: docker volume create missing-upgrade-495276 --label name.minikube.sigs.k8s.io=missing-upgrade-495276 --label created_by.minikube.sigs.k8s.io=true
	I1018 10:23:07.276542  434147 oci.go:103] Successfully created a docker volume missing-upgrade-495276
	I1018 10:23:07.276624  434147 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-495276-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-495276 --entrypoint /usr/bin/test -v missing-upgrade-495276:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1018 10:23:08.505237  434147 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-495276-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-495276 --entrypoint /usr/bin/test -v missing-upgrade-495276:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (1.228578236s)
	I1018 10:23:08.505255  434147 oci.go:107] Successfully prepared a docker volume missing-upgrade-495276
	I1018 10:23:08.505277  434147 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1018 10:23:08.505296  434147 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 10:23:08.505369  434147 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-495276:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 10:23:07.164181  434303 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:23:07.164204  434303 machine.go:96] duration metric: took 6.539373436s to provisionDockerMachine
	I1018 10:23:07.164215  434303 start.go:293] postStartSetup for "pause-019243" (driver="docker")
	I1018 10:23:07.164226  434303 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:23:07.164298  434303 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:23:07.164343  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:07.200169  434303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/pause-019243/id_rsa Username:docker}
	I1018 10:23:07.325167  434303 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:23:07.336449  434303 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:23:07.336476  434303 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:23:07.336487  434303 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:23:07.336545  434303 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:23:07.336624  434303 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:23:07.336731  434303 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:23:07.345984  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:23:07.366757  434303 start.go:296] duration metric: took 202.525985ms for postStartSetup
	I1018 10:23:07.366871  434303 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:23:07.366920  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:07.385337  434303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/pause-019243/id_rsa Username:docker}
	I1018 10:23:07.515631  434303 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:23:07.521084  434303 fix.go:56] duration metric: took 6.93456577s for fixHost
	I1018 10:23:07.521110  434303 start.go:83] releasing machines lock for "pause-019243", held for 6.934619727s
	I1018 10:23:07.521235  434303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-019243
	I1018 10:23:07.538218  434303 ssh_runner.go:195] Run: cat /version.json
	I1018 10:23:07.538280  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:07.538532  434303 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:23:07.538586  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:07.556540  434303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/pause-019243/id_rsa Username:docker}
	I1018 10:23:07.570730  434303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/pause-019243/id_rsa Username:docker}
	I1018 10:23:07.656825  434303 ssh_runner.go:195] Run: systemctl --version
	I1018 10:23:07.825102  434303 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:23:07.911937  434303 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:23:07.919604  434303 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:23:07.919672  434303 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:23:07.931314  434303 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 10:23:07.931340  434303 start.go:495] detecting cgroup driver to use...
	I1018 10:23:07.931372  434303 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:23:07.931423  434303 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:23:07.948989  434303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:23:07.974722  434303 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:23:07.974829  434303 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:23:07.993041  434303 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:23:08.007973  434303 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:23:08.215415  434303 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:23:08.402601  434303 docker.go:234] disabling docker service ...
	I1018 10:23:08.402722  434303 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:23:08.426461  434303 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:23:08.451760  434303 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:23:08.685806  434303 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:23:08.874499  434303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:23:08.889757  434303 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:23:08.904946  434303 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:23:08.905009  434303 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:08.915277  434303 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:23:08.915340  434303 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:08.924969  434303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:08.934777  434303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:08.944589  434303 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:23:08.953687  434303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:08.963618  434303 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:08.972588  434303 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:08.987262  434303 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:23:08.995938  434303 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:23:09.004528  434303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:23:09.178925  434303 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 10:23:09.791005  434303 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:23:09.791084  434303 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:23:09.796393  434303 start.go:563] Will wait 60s for crictl version
	I1018 10:23:09.796534  434303 ssh_runner.go:195] Run: which crictl
	I1018 10:23:09.802862  434303 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:23:09.850312  434303 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:23:09.850478  434303 ssh_runner.go:195] Run: crio --version
	I1018 10:23:09.904797  434303 ssh_runner.go:195] Run: crio --version
	I1018 10:23:09.943545  434303 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:23:09.944822  434303 cli_runner.go:164] Run: docker network inspect pause-019243 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:23:09.968789  434303 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 10:23:09.975971  434303 kubeadm.go:883] updating cluster {Name:pause-019243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-019243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:23:09.976122  434303 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:23:09.976182  434303 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:23:10.019327  434303 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:23:10.019358  434303 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:23:10.019421  434303 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:23:10.052539  434303 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:23:10.052567  434303 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:23:10.052577  434303 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 10:23:10.052710  434303 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-019243 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-019243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 10:23:10.052821  434303 ssh_runner.go:195] Run: crio config
	I1018 10:23:10.128180  434303 cni.go:84] Creating CNI manager for ""
	I1018 10:23:10.128205  434303 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:23:10.128228  434303 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:23:10.128253  434303 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-019243 NodeName:pause-019243 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:23:10.128414  434303 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-019243"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
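	The multi-document kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A sanity check can decode each `---`-separated document and inspect the fields the restart path cares about; a minimal sketch with gopkg.in/yaml.v3 (an assumption for illustration; minikube's own validation differs):

	-- go sketch --
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // path is illustrative
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			// Each `---`-separated document decodes into a generic map.
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			switch doc["kind"] {
			case "KubeletConfiguration":
				// Must agree with CRI-O's cgroup_manager, per the log above.
				fmt.Println("cgroupDriver:", doc["cgroupDriver"])
			case "ClusterConfiguration":
				fmt.Println("kubernetesVersion:", doc["kubernetesVersion"])
			}
		}
	}
	-- /go sketch --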
	
	I1018 10:23:10.128503  434303 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:23:10.137859  434303 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:23:10.137930  434303 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:23:10.148188  434303 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1018 10:23:10.162300  434303 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:23:10.175867  434303 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1018 10:23:10.190247  434303 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:23:10.194898  434303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:23:10.374979  434303 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:23:10.389621  434303 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243 for IP: 192.168.76.2
	I1018 10:23:10.389653  434303 certs.go:195] generating shared ca certs ...
	I1018 10:23:10.389669  434303 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:10.389824  434303 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:23:10.389892  434303 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:23:10.389909  434303 certs.go:257] generating profile certs ...
	I1018 10:23:10.390011  434303 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/client.key
	I1018 10:23:10.390096  434303 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/apiserver.key.1256d678
	I1018 10:23:10.390139  434303 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/proxy-client.key
	I1018 10:23:10.390274  434303 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:23:10.390315  434303 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:23:10.390328  434303 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:23:10.390353  434303 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:23:10.390392  434303 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:23:10.390419  434303 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:23:10.390474  434303 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:23:10.391156  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:23:10.409287  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:23:10.426764  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:23:10.443885  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:23:10.460999  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 10:23:10.478147  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:23:10.495468  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:23:10.512755  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 10:23:10.530881  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:23:10.548419  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:23:10.567454  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:23:10.585210  434303 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:23:10.598110  434303 ssh_runner.go:195] Run: openssl version
	I1018 10:23:10.607827  434303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:23:10.622905  434303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:23:10.627402  434303 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:23:10.627484  434303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:23:10.670153  434303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:23:10.679145  434303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:23:10.688955  434303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:23:10.693733  434303 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:23:10.693882  434303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:23:10.741222  434303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:23:10.751475  434303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:23:10.760664  434303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:23:10.765331  434303 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:23:10.765443  434303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:23:10.815592  434303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
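	The ls/openssl/ln sequence above installs each CA into OpenSSL's hashed-directory layout, where a certificate must be reachable in /etc/ssl/certs under the name <subject-hash>.0. A minimal sketch of the same steps from Go, shelling out to openssl for the hash (paths taken from the log; needs root to write /etc/ssl/certs):

	-- go sketch --
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA symlinks certPath into /etc/ssl/certs under its OpenSSL
	// subject hash, mirroring the `openssl x509 -hash` + `ln -fs` steps above.
	func installCA(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// `ln -fs` semantics: replace any stale symlink at the hash name.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	-- /go sketch --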
	I1018 10:23:10.825481  434303 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:23:10.829873  434303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 10:23:10.885286  434303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 10:23:10.930542  434303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 10:23:10.982157  434303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 10:23:11.028309  434303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 10:23:11.079256  434303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
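	Each `-checkend 86400` invocation above asks openssl whether the certificate expires within the next 24 hours. The same check can be done natively; a minimal sketch with crypto/x509 (the path is one of those probed above):

	-- go sketch --
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within d, the native equivalent of `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
	-- /go sketch --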
	I1018 10:23:11.126277  434303 kubeadm.go:400] StartCluster: {Name:pause-019243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-019243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:23:11.126468  434303 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:23:11.126563  434303 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:23:11.176252  434303 cri.go:89] found id: "0ae979551b18ee12476387ea61edbf996097504d4837b5af82bca211b75cbe5c"
	I1018 10:23:11.176330  434303 cri.go:89] found id: "006738fd96b2a20ec03049da106472c554433b20145062583aebec83cb373d89"
	I1018 10:23:11.176348  434303 cri.go:89] found id: "3cc277a3092b1996e080de34cfec6f38d30c32c1fb580882942cb48454483741"
	I1018 10:23:11.176368  434303 cri.go:89] found id: "8600f9e89059224c9e5954596534b99d00dc73824984fc82abde77714c802a01"
	I1018 10:23:11.176399  434303 cri.go:89] found id: "888e1c745ae86edf3cfb0b8124645f0fd6da8c2376869b3e66ea6f0930abf181"
	I1018 10:23:11.176423  434303 cri.go:89] found id: "b0c8a1278a6d644d49e8aa83478280670ab2c3020dc228a9b4dfe7c86b1f20f5"
	I1018 10:23:11.176446  434303 cri.go:89] found id: "d9c28587d48616a8fdebac1348a88dc7e223b29162f351fcf753a56d430aa742"
	I1018 10:23:11.176464  434303 cri.go:89] found id: ""
	I1018 10:23:11.176542  434303 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 10:23:11.202678  434303 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:23:11Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:23:11.202820  434303 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:23:11.218641  434303 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 10:23:11.218712  434303 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 10:23:11.218791  434303 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 10:23:11.228045  434303 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 10:23:11.228714  434303 kubeconfig.go:125] found "pause-019243" server: "https://192.168.76.2:8443"
	I1018 10:23:11.229483  434303 kapi.go:59] client config for pause-019243: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/client.crt", KeyFile:"/home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/client.key", CAFile:"/home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 10:23:11.230096  434303 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 10:23:11.230314  434303 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 10:23:11.230338  434303 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 10:23:11.230359  434303 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 10:23:11.230390  434303 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 10:23:11.230788  434303 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 10:23:11.246355  434303 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 10:23:11.246429  434303 kubeadm.go:601] duration metric: took 27.698107ms to restartPrimaryControlPlane
	I1018 10:23:11.246453  434303 kubeadm.go:402] duration metric: took 120.185716ms to StartCluster
	I1018 10:23:11.246496  434303 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:11.246603  434303 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:23:11.247303  434303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:11.247591  434303 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:23:11.248004  434303 config.go:182] Loaded profile config "pause-019243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:23:11.248093  434303 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:23:11.251355  434303 out.go:179] * Verifying Kubernetes components...
	I1018 10:23:11.253298  434303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:23:11.253403  434303 out.go:179] * Enabled addons: 
	I1018 10:23:12.878195  434147 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-495276:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.37278546s)
	I1018 10:23:12.878218  434147 kic.go:203] duration metric: took 4.372920 seconds to extract preloaded images to volume
	W1018 10:23:12.878364  434147 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 10:23:12.878458  434147 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 10:23:12.935590  434147 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-495276 --name missing-upgrade-495276 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-495276 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-495276 --network missing-upgrade-495276 --ip 192.168.85.2 --volume missing-upgrade-495276:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1018 10:23:13.396026  434147 cli_runner.go:164] Run: docker container inspect missing-upgrade-495276 --format={{.State.Running}}
	I1018 10:23:13.426676  434147 cli_runner.go:164] Run: docker container inspect missing-upgrade-495276 --format={{.State.Status}}
	I1018 10:23:13.449474  434147 cli_runner.go:164] Run: docker exec missing-upgrade-495276 stat /var/lib/dpkg/alternatives/iptables
	I1018 10:23:13.522041  434147 oci.go:144] the created container "missing-upgrade-495276" has a running status.
	I1018 10:23:13.522061  434147 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/missing-upgrade-495276/id_rsa...
	I1018 10:23:14.100306  434147 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-293333/.minikube/machines/missing-upgrade-495276/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 10:23:14.132242  434147 cli_runner.go:164] Run: docker container inspect missing-upgrade-495276 --format={{.State.Status}}
	I1018 10:23:14.161969  434147 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 10:23:14.161981  434147 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-495276 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 10:23:11.254544  434303 addons.go:514] duration metric: took 6.434015ms for enable addons: enabled=[]
	I1018 10:23:11.429765  434303 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:23:11.445879  434303 node_ready.go:35] waiting up to 6m0s for node "pause-019243" to be "Ready" ...
	W1018 10:23:13.447715  434303 node_ready.go:55] error getting node "pause-019243" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/pause-019243": dial tcp 192.168.76.2:8443: connect: connection refused
	I1018 10:23:14.257536  434147 cli_runner.go:164] Run: docker container inspect missing-upgrade-495276 --format={{.State.Status}}
	I1018 10:23:14.285387  434147 machine.go:88] provisioning docker machine ...
	I1018 10:23:14.285408  434147 ubuntu.go:169] provisioning hostname "missing-upgrade-495276"
	I1018 10:23:14.285475  434147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-495276
	I1018 10:23:14.315377  434147 main.go:141] libmachine: Using SSH client type: native
	I1018 10:23:14.315805  434147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33363 <nil> <nil>}
	I1018 10:23:14.315815  434147 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-495276 && echo "missing-upgrade-495276" | sudo tee /etc/hostname
	I1018 10:23:14.316475  434147 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50812->127.0.0.1:33363: read: connection reset by peer
	I1018 10:23:17.508546  434147 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-495276
	
	I1018 10:23:17.508627  434147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-495276
	I1018 10:23:17.537488  434147 main.go:141] libmachine: Using SSH client type: native
	I1018 10:23:17.537912  434147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33363 <nil> <nil>}
	I1018 10:23:17.537928  434147 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-495276' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-495276/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-495276' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:23:17.697725  434147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 10:23:17.697741  434147 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:23:17.697797  434147 ubuntu.go:177] setting up certificates
	I1018 10:23:17.697806  434147 provision.go:83] configureAuth start
	I1018 10:23:17.697881  434147 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-495276
	I1018 10:23:17.725770  434147 provision.go:138] copyHostCerts
	I1018 10:23:17.725832  434147 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:23:17.725839  434147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:23:17.725926  434147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:23:17.726027  434147 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:23:17.726031  434147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:23:17.726056  434147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:23:17.726112  434147 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:23:17.726115  434147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:23:17.726138  434147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:23:17.726189  434147 provision.go:112] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-495276 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-495276]
	I1018 10:23:18.101135  434147 provision.go:172] copyRemoteCerts
	I1018 10:23:18.101220  434147 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:23:18.101263  434147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-495276
	I1018 10:23:18.131868  434147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/missing-upgrade-495276/id_rsa Username:docker}
	I1018 10:23:18.243049  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:23:18.295367  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1018 10:23:18.329121  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 10:23:18.356991  434147 provision.go:86] duration metric: configureAuth took 659.172436ms
	I1018 10:23:18.357009  434147 ubuntu.go:193] setting minikube options for container-runtime
	I1018 10:23:18.357209  434147 config.go:182] Loaded profile config "missing-upgrade-495276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1018 10:23:18.357316  434147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-495276
	I1018 10:23:18.383093  434147 main.go:141] libmachine: Using SSH client type: native
	I1018 10:23:18.383502  434147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33363 <nil> <nil>}
	I1018 10:23:18.383515  434147 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:23:18.786402  434147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:23:18.786417  434147 machine.go:91] provisioned docker machine in 4.501018512s
	I1018 10:23:18.786425  434147 client.go:171] LocalClient.Create took 11.69425436s
	I1018 10:23:18.786436  434147 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-495276" took 11.694289757s
	I1018 10:23:18.786443  434147 start.go:300] post-start starting for "missing-upgrade-495276" (driver="docker")
	I1018 10:23:18.786451  434147 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:23:18.786510  434147 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:23:18.786551  434147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-495276
	I1018 10:23:18.821625  434147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/missing-upgrade-495276/id_rsa Username:docker}
	I1018 10:23:18.923512  434147 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:23:18.927254  434147 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:23:18.927286  434147 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1018 10:23:18.927295  434147 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1018 10:23:18.927302  434147 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1018 10:23:18.927314  434147 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:23:18.927372  434147 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:23:18.927445  434147 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:23:18.927542  434147 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:23:18.936803  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:23:18.978511  434147 start.go:303] post-start completed in 192.054785ms
	I1018 10:23:18.978871  434147 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-495276
	I1018 10:23:19.003371  434147 profile.go:148] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/config.json ...
	I1018 10:23:19.003653  434147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:23:19.003693  434147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-495276
	I1018 10:23:19.036373  434147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/missing-upgrade-495276/id_rsa Username:docker}
	I1018 10:23:19.139099  434147 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:23:19.156696  434147 start.go:128] duration metric: createHost completed in 12.068489431s
	I1018 10:23:19.156712  434147 start.go:83] releasing machines lock for "missing-upgrade-495276", held for 12.068605458s
	I1018 10:23:19.156823  434147 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-495276
	I1018 10:23:19.187507  434147 ssh_runner.go:195] Run: cat /version.json
	I1018 10:23:19.187562  434147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-495276
	I1018 10:23:19.187871  434147 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:23:19.187923  434147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-495276
	I1018 10:23:19.242407  434147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/missing-upgrade-495276/id_rsa Username:docker}
	I1018 10:23:19.383980  434303 node_ready.go:49] node "pause-019243" is "Ready"
	I1018 10:23:19.384008  434303 node_ready.go:38] duration metric: took 7.938100191s for node "pause-019243" to be "Ready" ...
	I1018 10:23:19.384022  434303 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:23:19.384084  434303 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:23:19.414807  434303 api_server.go:72] duration metric: took 8.167155242s to wait for apiserver process to appear ...
	I1018 10:23:19.414829  434303 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:23:19.414848  434303 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 10:23:19.493026  434303 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 10:23:19.493058  434303 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 10:23:19.915645  434303 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 10:23:19.925818  434303 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 10:23:19.925954  434303 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
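	The probes above show the normal restart pattern: /healthz returns 500 while the post-start hooks (the [-] lines) settle, and the caller retries until it sees 200. A minimal sketch of that polling loop against the same endpoint (InsecureSkipVerify because this sketch skips loading minikubeCA; a status-code probe does not need to verify the chain):

	-- go sketch --
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 30; i++ {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz ok")
					return
				}
				// 500 with [-] lines, as in the log: hooks still settling.
				fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for healthz")
	}
	-- /go sketch --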
	I1018 10:23:19.251217  434147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/missing-upgrade-495276/id_rsa Username:docker}
	I1018 10:23:19.345529  434147 ssh_runner.go:195] Run: systemctl --version
	I1018 10:23:19.541895  434147 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:23:19.703985  434147 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1018 10:23:19.710943  434147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:23:19.751231  434147 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1018 10:23:19.751298  434147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:23:19.806647  434147 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1018 10:23:19.806660  434147 start.go:472] detecting cgroup driver to use...
	I1018 10:23:19.806704  434147 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1018 10:23:19.806755  434147 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:23:19.829946  434147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:23:19.853364  434147 docker.go:203] disabling cri-docker service (if available) ...
	I1018 10:23:19.853415  434147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:23:19.869674  434147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:23:19.897784  434147 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:23:20.016707  434147 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:23:20.170163  434147 docker.go:219] disabling docker service ...
	I1018 10:23:20.170219  434147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:23:20.198172  434147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:23:20.213245  434147 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:23:20.320193  434147 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:23:20.415101  434147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:23:20.435825  434147 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:23:20.458821  434147 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 10:23:20.458869  434147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:20.474433  434147 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:23:20.474493  434147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:20.486287  434147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:20.496359  434147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:20.506559  434147 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:23:20.515748  434147 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:23:20.524508  434147 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:23:20.533426  434147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:23:20.622982  434147 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 10:23:20.734705  434147 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:23:20.734769  434147 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:23:20.738187  434147 start.go:540] Will wait 60s for crictl version
	I1018 10:23:20.738237  434147 ssh_runner.go:195] Run: which crictl
	I1018 10:23:20.741657  434147 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 10:23:20.783804  434147 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1018 10:23:20.783891  434147 ssh_runner.go:195] Run: crio --version
	I1018 10:23:20.830708  434147 ssh_runner.go:195] Run: crio --version
	I1018 10:23:20.880642  434147 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1018 10:23:20.883489  434147 cli_runner.go:164] Run: docker network inspect missing-upgrade-495276 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:23:20.899626  434147 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 10:23:20.903340  434147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:23:20.914383  434147 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1018 10:23:20.914449  434147 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:23:20.982397  434147 crio.go:496] all images are preloaded for cri-o runtime.
	I1018 10:23:20.982410  434147 crio.go:415] Images already preloaded, skipping extraction
	I1018 10:23:20.982465  434147 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:23:21.019956  434147 crio.go:496] all images are preloaded for cri-o runtime.
	I1018 10:23:21.019969  434147 cache_images.go:84] Images are preloaded, skipping loading
	I1018 10:23:21.020057  434147 ssh_runner.go:195] Run: crio config
	I1018 10:23:21.071827  434147 cni.go:84] Creating CNI manager for ""
	I1018 10:23:21.071839  434147 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:23:21.071859  434147 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1018 10:23:21.071878  434147 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-495276 NodeName:missing-upgrade-495276 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:23:21.072065  434147 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "missing-upgrade-495276"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 10:23:21.072128  434147 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=missing-upgrade-495276 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-495276 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1018 10:23:21.072190  434147 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1018 10:23:21.081254  434147 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:23:21.081338  434147 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:23:21.090406  434147 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1018 10:23:21.109087  434147 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:23:21.127041  434147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1018 10:23:21.144673  434147 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:23:21.148293  434147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:23:21.158782  434147 certs.go:56] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276 for IP: 192.168.85.2
	I1018 10:23:21.158804  434147 certs.go:190] acquiring lock for shared ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:21.158943  434147 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:23:21.158995  434147 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:23:21.159049  434147 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/client.key
	I1018 10:23:21.159057  434147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/client.crt with IP's: []
	I1018 10:23:22.061719  434147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/client.crt ...
	I1018 10:23:22.061736  434147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/client.crt: {Name:mk7abe6f2762aafb2a9e0f65218c8aed848c4c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:22.061944  434147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/client.key ...
	I1018 10:23:22.061952  434147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/client.key: {Name:mkbdf54a8eb2c7282f7ef45516193ef28a1f72e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:22.062043  434147 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.key.43b9df8c
	I1018 10:23:22.062055  434147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1018 10:23:22.478507  434147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.crt.43b9df8c ...
	I1018 10:23:22.478522  434147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.crt.43b9df8c: {Name:mkc47a4d6191450eca19682adbdce8135345fec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:22.478703  434147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.key.43b9df8c ...
	I1018 10:23:22.478711  434147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.key.43b9df8c: {Name:mk29ee8b6929dea219bc8b18df203fee686da6e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:22.478797  434147 certs.go:337] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.crt
	I1018 10:23:22.478872  434147 certs.go:341] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.key
	I1018 10:23:22.478920  434147 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/proxy-client.key
	I1018 10:23:22.478930  434147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/proxy-client.crt with IP's: []
	I1018 10:23:23.433916  434147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/proxy-client.crt ...
	I1018 10:23:23.433935  434147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/proxy-client.crt: {Name:mk619359d4957aace0b25da816bf2333104ceaab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:23.434139  434147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/proxy-client.key ...
	I1018 10:23:23.434147  434147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/proxy-client.key: {Name:mk744fcc9cd750e42a6c30f1682e066cd2d67c5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
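
The certs.go/crypto.go sequence above generates each profile certificate (client, apiserver, proxy-client) and writes the .crt/.key pair under a file lock. Below is a minimal self-contained sketch of the apiserver serving-cert step, using the IP SANs from the log; for brevity it self-signs, whereas the real flow signs with the shared minikube CA.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		// 26280h matches the CertExpiration value in the cluster config below.
		NotAfter:    time.Now().Add(26280 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs logged for the apiserver cert: node IP, service VIP,
		// loopback, and the alternate service VIP.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.85.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
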
	I1018 10:23:23.434379  434147 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:23:23.434421  434147 certs.go:433] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:23:23.434429  434147 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:23:23.434457  434147 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:23:23.434480  434147 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:23:23.434505  434147 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:23:23.434554  434147 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:23:23.435194  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1018 10:23:23.463846  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 10:23:23.489099  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:23:23.515740  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 10:23:23.555259  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:23:23.582181  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:23:23.607347  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:23:23.631757  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:23:23.657379  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:23:23.683723  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:23:23.708229  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:23:23.733550  434147 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:23:23.752500  434147 ssh_runner.go:195] Run: openssl version
	I1018 10:23:23.758479  434147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:23:23.768021  434147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:23:23.771886  434147 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:23:23.771942  434147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:23:23.779251  434147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:23:23.789341  434147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:23:23.799877  434147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:23:23.804199  434147 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:23:23.804286  434147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:23:23.811920  434147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:23:23.821770  434147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:23:23.831269  434147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:23:23.835167  434147 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:23:23.835225  434147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:23:23.842337  434147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
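
Each CA bundle installed under /usr/share/ca-certificates is then made discoverable by OpenSSL by linking /etc/ssl/certs/<subject-hash>.0 to it, which is exactly what the openssl x509 -hash / ln -fs pairs above do. A minimal Go sketch of the same step, shelling out to openssl as the log does; error handling is abbreviated.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a PEM certificate and
// creates the /etc/ssl/certs/<hash>.0 symlink OpenSSL lookup expects.
func linkByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // refresh any existing link, mirroring ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
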
	I1018 10:23:23.852510  434147 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1018 10:23:23.856167  434147 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1018 10:23:23.856216  434147 kubeadm.go:404] StartCluster: {Name:missing-upgrade-495276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-495276 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1018 10:23:23.856283  434147 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:23:23.856338  434147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:23:23.894426  434147 cri.go:89] found id: ""
	I1018 10:23:23.894511  434147 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:23:23.903810  434147 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 10:23:23.912978  434147 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1018 10:23:23.913036  434147 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 10:23:23.922303  434147 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 10:23:23.922337  434147 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 10:23:20.415835  434303 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 10:23:20.424606  434303 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 10:23:20.425785  434303 api_server.go:141] control plane version: v1.34.1
	I1018 10:23:20.425806  434303 api_server.go:131] duration metric: took 1.010970275s to wait for apiserver health ...
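
The health wait above is a plain HTTPS probe of the apiserver's /healthz endpoint, retried until it returns 200 with body "ok". A minimal sketch follows, with the caveat that the real check verifies the cluster CA rather than skipping TLS verification.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy reports whether GET <endpoint>/healthz answers 200 "ok".
func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: trust the cluster CA in real code instead of
		// disabling verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.76.2:8443")
	fmt.Println(ok, err)
}
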
	I1018 10:23:20.425815  434303 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:23:20.429119  434303 system_pods.go:59] 7 kube-system pods found
	I1018 10:23:20.429152  434303 system_pods.go:61] "coredns-66bc5c9577-wzfbh" [6de6d24e-a83f-44a4-b857-2dfe3762f0ad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:23:20.429164  434303 system_pods.go:61] "etcd-pause-019243" [641526b8-c065-4e61-9b44-9e121e29d662] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:23:20.429171  434303 system_pods.go:61] "kindnet-9p267" [77f6445a-e9ac-4649-97a9-01e4119993f6] Running
	I1018 10:23:20.429178  434303 system_pods.go:61] "kube-apiserver-pause-019243" [de398340-a205-4697-91b8-bcee5807c22a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:23:20.429286  434303 system_pods.go:61] "kube-controller-manager-pause-019243" [8c88d47c-5404-4458-b28c-b6785c57b652] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:23:20.429293  434303 system_pods.go:61] "kube-proxy-9ph8v" [cffc1e4e-0867-497b-9adf-a0e9b98374b5] Running
	I1018 10:23:20.429299  434303 system_pods.go:61] "kube-scheduler-pause-019243" [f053abc5-7513-4b80-aac1-61ac9edc79a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:23:20.429305  434303 system_pods.go:74] duration metric: took 3.473119ms to wait for pod list to return data ...
	I1018 10:23:20.429320  434303 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:23:20.432519  434303 default_sa.go:45] found service account: "default"
	I1018 10:23:20.432538  434303 default_sa.go:55] duration metric: took 3.212796ms for default service account to be created ...
	I1018 10:23:20.432547  434303 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 10:23:20.438014  434303 system_pods.go:86] 7 kube-system pods found
	I1018 10:23:20.438048  434303 system_pods.go:89] "coredns-66bc5c9577-wzfbh" [6de6d24e-a83f-44a4-b857-2dfe3762f0ad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:23:20.438057  434303 system_pods.go:89] "etcd-pause-019243" [641526b8-c065-4e61-9b44-9e121e29d662] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:23:20.438064  434303 system_pods.go:89] "kindnet-9p267" [77f6445a-e9ac-4649-97a9-01e4119993f6] Running
	I1018 10:23:20.438070  434303 system_pods.go:89] "kube-apiserver-pause-019243" [de398340-a205-4697-91b8-bcee5807c22a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:23:20.438077  434303 system_pods.go:89] "kube-controller-manager-pause-019243" [8c88d47c-5404-4458-b28c-b6785c57b652] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:23:20.438081  434303 system_pods.go:89] "kube-proxy-9ph8v" [cffc1e4e-0867-497b-9adf-a0e9b98374b5] Running
	I1018 10:23:20.438087  434303 system_pods.go:89] "kube-scheduler-pause-019243" [f053abc5-7513-4b80-aac1-61ac9edc79a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:23:20.438093  434303 system_pods.go:126] duration metric: took 5.540805ms to wait for k8s-apps to be running ...
	I1018 10:23:20.438102  434303 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 10:23:20.438154  434303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:23:20.454351  434303 system_svc.go:56] duration metric: took 16.232823ms WaitForService to wait for kubelet
	I1018 10:23:20.454385  434303 kubeadm.go:586] duration metric: took 9.206735097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:23:20.454405  434303 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:23:20.457865  434303 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:23:20.457910  434303 node_conditions.go:123] node cpu capacity is 2
	I1018 10:23:20.457926  434303 node_conditions.go:105] duration metric: took 3.515842ms to run NodePressure ...
	I1018 10:23:20.457938  434303 start.go:241] waiting for startup goroutines ...
	I1018 10:23:20.457945  434303 start.go:246] waiting for cluster config update ...
	I1018 10:23:20.457957  434303 start.go:255] writing updated cluster config ...
	I1018 10:23:20.458285  434303 ssh_runner.go:195] Run: rm -f paused
	I1018 10:23:20.463687  434303 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:23:20.464572  434303 kapi.go:59] client config for pause-019243: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/client.crt", KeyFile:"/home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/client.key", CAFile:"/home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 10:23:20.472859  434303 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wzfbh" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 10:23:22.485623  434303 pod_ready.go:104] pod "coredns-66bc5c9577-wzfbh" is not "Ready", error: <nil>
	W1018 10:23:24.979125  434303 pod_ready.go:104] pod "coredns-66bc5c9577-wzfbh" is not "Ready", error: <nil>
	I1018 10:23:24.276754  434147 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1018 10:23:24.277097  434147 kubeadm.go:322] [preflight] Running pre-flight checks
	I1018 10:23:24.331543  434147 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1018 10:23:24.331602  434147 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 10:23:24.331634  434147 kubeadm.go:322] OS: Linux
	I1018 10:23:24.331676  434147 kubeadm.go:322] CGROUPS_CPU: enabled
	I1018 10:23:24.331720  434147 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1018 10:23:24.331763  434147 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1018 10:23:24.331807  434147 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1018 10:23:24.331851  434147 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1018 10:23:24.331895  434147 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1018 10:23:24.331936  434147 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1018 10:23:24.331980  434147 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1018 10:23:24.332022  434147 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1018 10:23:24.923037  434147 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 10:23:24.923242  434147 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 10:23:24.923347  434147 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1018 10:23:25.194726  434147 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 10:23:25.197814  434147 out.go:204]   - Generating certificates and keys ...
	I1018 10:23:25.198003  434147 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1018 10:23:25.201507  434147 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1018 10:23:25.951742  434147 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 10:23:26.153370  434147 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1018 10:23:26.509977  434147 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1018 10:23:27.029782  434147 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1018 10:23:27.364560  434147 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1018 10:23:27.365007  434147 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost missing-upgrade-495276] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 10:23:28.021913  434147 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1018 10:23:28.022281  434147 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost missing-upgrade-495276] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 10:23:28.447855  434147 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 10:23:29.025357  434147 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 10:23:25.479101  434303 pod_ready.go:94] pod "coredns-66bc5c9577-wzfbh" is "Ready"
	I1018 10:23:25.479124  434303 pod_ready.go:86] duration metric: took 5.006230461s for pod "coredns-66bc5c9577-wzfbh" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:25.482098  434303 pod_ready.go:83] waiting for pod "etcd-pause-019243" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 10:23:27.493113  434303 pod_ready.go:104] pod "etcd-pause-019243" is not "Ready", error: <nil>
	W1018 10:23:29.987155  434303 pod_ready.go:104] pod "etcd-pause-019243" is not "Ready", error: <nil>
	I1018 10:23:30.988347  434303 pod_ready.go:94] pod "etcd-pause-019243" is "Ready"
	I1018 10:23:30.988372  434303 pod_ready.go:86] duration metric: took 5.506252831s for pod "etcd-pause-019243" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:30.994821  434303 pod_ready.go:83] waiting for pod "kube-apiserver-pause-019243" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:31.000428  434303 pod_ready.go:94] pod "kube-apiserver-pause-019243" is "Ready"
	I1018 10:23:31.000449  434303 pod_ready.go:86] duration metric: took 5.604438ms for pod "kube-apiserver-pause-019243" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:31.003226  434303 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-019243" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:31.008713  434303 pod_ready.go:94] pod "kube-controller-manager-pause-019243" is "Ready"
	I1018 10:23:31.008789  434303 pod_ready.go:86] duration metric: took 5.542392ms for pod "kube-controller-manager-pause-019243" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:31.011441  434303 pod_ready.go:83] waiting for pod "kube-proxy-9ph8v" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:31.185524  434303 pod_ready.go:94] pod "kube-proxy-9ph8v" is "Ready"
	I1018 10:23:31.185593  434303 pod_ready.go:86] duration metric: took 174.124884ms for pod "kube-proxy-9ph8v" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:31.385495  434303 pod_ready.go:83] waiting for pod "kube-scheduler-pause-019243" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:31.791265  434303 pod_ready.go:94] pod "kube-scheduler-pause-019243" is "Ready"
	I1018 10:23:31.791297  434303 pod_ready.go:86] duration metric: took 405.734612ms for pod "kube-scheduler-pause-019243" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:31.791310  434303 pod_ready.go:40] duration metric: took 11.327581601s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:23:31.868526  434303 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:23:31.871806  434303 out.go:179] * Done! kubectl is now configured to use "pause-019243" cluster and "default" namespace by default
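
The pod_ready.go wait interleaved above polls kube-system pods matching each control-plane label until the PodReady condition is True, capped at 4m0s. A minimal client-go sketch of the same loop; the kubeconfig path is hypothetical and the retry budget is simplified.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitReady polls kube-system pods matching sel until one is Ready,
// giving up after roughly the 4m0s budget the log mentions.
func waitReady(cs *kubernetes.Clientset, sel string) bool {
	for attempt := 0; attempt < 120; attempt++ {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: sel})
		if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
			return true
		}
		time.Sleep(2 * time.Second)
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	labels := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
	for _, sel := range labels {
		fmt.Printf("%s ready=%v\n", sel, waitReady(cs, sel))
	}
}
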
	I1018 10:23:29.886097  434147 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1018 10:23:29.886387  434147 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 10:23:30.376511  434147 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 10:23:30.693630  434147 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 10:23:31.302421  434147 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 10:23:31.875177  434147 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 10:23:31.876166  434147 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 10:23:31.878826  434147 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 10:23:31.882105  434147 out.go:204]   - Booting up control plane ...
	I1018 10:23:31.882271  434147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 10:23:31.882512  434147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 10:23:31.886032  434147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 10:23:31.898367  434147 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 10:23:31.899012  434147 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 10:23:31.899261  434147 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1018 10:23:32.057808  434147 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	
	
	==> CRI-O <==
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.221994551Z" level=info msg="Starting container: e9fc519ce787134d9fb283ac3940bcdbcda1de76ea88d17fe5e11bd56e515333" id=7a584b17-0fa6-4c31-b8a5-c21e4a4c6f1b name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.222029251Z" level=info msg="Starting container: 06d43ac35777a88f3e530cc1738680e465f2c9e6e963d7941bfd460e7bbddbd1" id=726c0a26-0400-40ce-bd89-1c961fab11c9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.231929122Z" level=info msg="Started container" PID=2399 containerID=06d43ac35777a88f3e530cc1738680e465f2c9e6e963d7941bfd460e7bbddbd1 description=kube-system/kube-scheduler-pause-019243/kube-scheduler id=726c0a26-0400-40ce-bd89-1c961fab11c9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c7cf07ac8cc00bd65773914c0557f9b13031b47576090d13d6e848898988561
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.23773472Z" level=info msg="Started container" PID=2398 containerID=e9fc519ce787134d9fb283ac3940bcdbcda1de76ea88d17fe5e11bd56e515333 description=kube-system/kube-apiserver-pause-019243/kube-apiserver id=7a584b17-0fa6-4c31-b8a5-c21e4a4c6f1b name=/runtime.v1.RuntimeService/StartContainer sandboxID=1bd630ccfaf8403b818efdb20229bfd413e6d1ef3d74d59aa896ce93febbc4df
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.241394395Z" level=info msg="Created container d7c9eaf75f0a52bfc264229bcdb9b422cf7b433d030219d78d959d7090553e93: kube-system/kube-controller-manager-pause-019243/kube-controller-manager" id=3909bf18-7adc-458c-ab09-49641386e8e5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.245620648Z" level=info msg="Starting container: d7c9eaf75f0a52bfc264229bcdb9b422cf7b433d030219d78d959d7090553e93" id=674f2fd1-8b9a-43ba-9352-ddcbf96e2e67 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.273918893Z" level=info msg="Started container" PID=2410 containerID=d7c9eaf75f0a52bfc264229bcdb9b422cf7b433d030219d78d959d7090553e93 description=kube-system/kube-controller-manager-pause-019243/kube-controller-manager id=674f2fd1-8b9a-43ba-9352-ddcbf96e2e67 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0933f35c7907f4e18c22625be2aa68656352f9201338e9484e5e6349d9f95151
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.284033975Z" level=info msg="Created container e4701015deca0aaf19fd21f101a1b2f4db45f40d88473ec3a8eb76be901e6b18: kube-system/etcd-pause-019243/etcd" id=79d2b93b-4f1d-4195-9f39-7eaf67d6104b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.286360326Z" level=info msg="Starting container: e4701015deca0aaf19fd21f101a1b2f4db45f40d88473ec3a8eb76be901e6b18" id=0ed49bc9-ba36-4df1-ba4f-c070a5b83d83 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.304822847Z" level=info msg="Started container" PID=2403 containerID=e4701015deca0aaf19fd21f101a1b2f4db45f40d88473ec3a8eb76be901e6b18 description=kube-system/etcd-pause-019243/etcd id=0ed49bc9-ba36-4df1-ba4f-c070a5b83d83 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5828504b69145c55a494e06aaf34bba138c29b377522216f14394718a5db16ab
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.525959114Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.529905214Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.52994793Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.529968336Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.534496373Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.534541706Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.534562736Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.53922326Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.539259986Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.539279744Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.546252475Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.546297636Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.546319872Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.554221Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.554254928Z" level=info msg="Updated default CNI network name to kindnet"
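
The CNI monitoring events above come from CRI-O watching /etc/cni/net.d and reloading the default network whenever a conflist is created, written, or renamed into place. A minimal sketch of such a watcher using github.com/fsnotify/fsnotify (which the ocicni watcher embedded in CRI-O is believed to build on); the reload itself is stubbed as a log line.

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// A real monitor would re-read the conflist here and update
			// the default CNI network name, as the CRI-O log shows.
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		case err := <-w.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}
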
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	d7c9eaf75f0a5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   23 seconds ago       Running             kube-controller-manager   1                   0933f35c7907f       kube-controller-manager-pause-019243   kube-system
	e4701015deca0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   23 seconds ago       Running             etcd                      1                   5828504b69145       etcd-pause-019243                      kube-system
	06d43ac35777a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   23 seconds ago       Running             kube-scheduler            1                   5c7cf07ac8cc0       kube-scheduler-pause-019243            kube-system
	e9fc519ce7871       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   23 seconds ago       Running             kube-apiserver            1                   1bd630ccfaf84       kube-apiserver-pause-019243            kube-system
	15128da41ead8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   23 seconds ago       Running             coredns                   1                   badf1df91e20d       coredns-66bc5c9577-wzfbh               kube-system
	09054465f840b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   23 seconds ago       Running             kindnet-cni               1                   edc7da5e59966       kindnet-9p267                          kube-system
	d28ad7321d63b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   23 seconds ago       Running             kube-proxy                1                   e144ff6b3a6f1       kube-proxy-9ph8v                       kube-system
	0ae979551b18e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   38 seconds ago       Exited              coredns                   0                   badf1df91e20d       coredns-66bc5c9577-wzfbh               kube-system
	006738fd96b2a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   e144ff6b3a6f1       kube-proxy-9ph8v                       kube-system
	3cc277a3092b1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   edc7da5e59966       kindnet-9p267                          kube-system
	8600f9e890592       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   5c7cf07ac8cc0       kube-scheduler-pause-019243            kube-system
	888e1c745ae86       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   1bd630ccfaf84       kube-apiserver-pause-019243            kube-system
	b0c8a1278a6d6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   5828504b69145       etcd-pause-019243                      kube-system
	d9c28587d4861       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   0933f35c7907f       kube-controller-manager-pause-019243   kube-system
	
	
	==> coredns [0ae979551b18ee12476387ea61edbf996097504d4837b5af82bca211b75cbe5c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46246 - 20172 "HINFO IN 2630737520422832076.2396624382892307287. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020162302s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [15128da41ead8115fa2f84a7672dd4abe119002449d59f70960273bd6e459027] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47169 - 10573 "HINFO IN 7964605365171995352.3829374612193345895. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.054716163s
	
	
	==> describe nodes <==
	Name:               pause-019243
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-019243
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=pause-019243
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_22_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:22:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-019243
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:23:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:23:30 +0000   Sat, 18 Oct 2025 10:21:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:23:30 +0000   Sat, 18 Oct 2025 10:21:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:23:30 +0000   Sat, 18 Oct 2025 10:21:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 10:23:30 +0000   Sat, 18 Oct 2025 10:22:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-019243
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                6b383227-2406-4550-a6ab-0b7bf44092aa
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-wzfbh                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     83s
	  kube-system                 etcd-pause-019243                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         91s
	  kube-system                 kindnet-9p267                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      83s
	  kube-system                 kube-apiserver-pause-019243             250m (12%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-controller-manager-pause-019243    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-9ph8v                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-pause-019243             100m (5%)     0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 79s                  kube-proxy       
	  Normal   Starting                 17s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  100s (x8 over 101s)  kubelet          Node pause-019243 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    100s (x8 over 101s)  kubelet          Node pause-019243 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     100s (x8 over 101s)  kubelet          Node pause-019243 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 88s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  88s                  kubelet          Node pause-019243 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s                  kubelet          Node pause-019243 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     88s                  kubelet          Node pause-019243 status is now: NodeHasSufficientPID
	  Normal   Starting                 88s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           84s                  node-controller  Node pause-019243 event: Registered Node pause-019243 in Controller
	  Normal   NodeReady                39s                  kubelet          Node pause-019243 status is now: NodeReady
	  Warning  ContainerGCFailed        28s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           14s                  node-controller  Node pause-019243 event: Registered Node pause-019243 in Controller
	
	
	==> dmesg <==
	[Oct18 09:58] overlayfs: idmapped layers are currently not supported
	[  +3.833371] overlayfs: idmapped layers are currently not supported
	[Oct18 10:00] overlayfs: idmapped layers are currently not supported
	[Oct18 10:01] overlayfs: idmapped layers are currently not supported
	[Oct18 10:02] overlayfs: idmapped layers are currently not supported
	[  +3.752225] overlayfs: idmapped layers are currently not supported
	[Oct18 10:03] overlayfs: idmapped layers are currently not supported
	[ +25.695966] overlayfs: idmapped layers are currently not supported
	[Oct18 10:05] overlayfs: idmapped layers are currently not supported
	[Oct18 10:10] overlayfs: idmapped layers are currently not supported
	[ +35.463301] overlayfs: idmapped layers are currently not supported
	[Oct18 10:11] overlayfs: idmapped layers are currently not supported
	[Oct18 10:13] overlayfs: idmapped layers are currently not supported
	[Oct18 10:14] overlayfs: idmapped layers are currently not supported
	[Oct18 10:15] overlayfs: idmapped layers are currently not supported
	[Oct18 10:16] overlayfs: idmapped layers are currently not supported
	[  +1.944912] overlayfs: idmapped layers are currently not supported
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	[ +55.677340] overlayfs: idmapped layers are currently not supported
	[  +3.870584] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b0c8a1278a6d644d49e8aa83478280670ab2c3020dc228a9b4dfe7c86b1f20f5] <==
	{"level":"warn","ts":"2025-10-18T10:22:02.209752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:22:02.236453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:22:02.287104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:22:02.302065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:22:02.349712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:22:02.393931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:22:02.608372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45872","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T10:23:01.968358Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T10:23:01.968409Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-019243","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-18T10:23:01.968505Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T10:23:01.968560Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-18T10:23:02.533159Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T10:23:02.533252Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T10:23:02.533261Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T10:23:02.533325Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T10:23:02.533335Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T10:23:02.533340Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-18T10:23:02.533356Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T10:23:02.533375Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-18T10:23:02.533502Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-18T10:23:02.533532Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T10:23:02.536744Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-18T10:23:02.536823Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T10:23:02.536859Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T10:23:02.536866Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-019243","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [e4701015deca0aaf19fd21f101a1b2f4db45f40d88473ec3a8eb76be901e6b18] <==
	{"level":"warn","ts":"2025-10-18T10:23:17.264354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.294129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.313596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.401417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.411122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.445465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.463784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.506107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.559677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.610154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.694537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.739590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.765745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.794901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.816281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.842230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.869893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.896104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.945921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.978208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:18.024834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:18.057696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:18.093105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:18.117239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:18.195837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45078","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:23:36 up  2:06,  0 user,  load average: 3.05, 2.34, 2.07
	Linux pause-019243 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [09054465f840b5bd38d7d4516d56642a1c6df2c6eb394ff6de2428c47a2a957d] <==
	I1018 10:23:13.252597       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:23:13.253629       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 10:23:13.253785       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:23:13.253798       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:23:13.253811       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:23:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:23:13.527352       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:23:13.527508       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:23:13.527664       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:23:13.531188       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 10:23:19.429844       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 10:23:19.429960       1 metrics.go:72] Registering metrics
	I1018 10:23:19.430096       1 controller.go:711] "Syncing nftables rules"
	I1018 10:23:23.525473       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:23:23.525631       1 main.go:301] handling current node
	I1018 10:23:33.524638       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:23:33.524697       1 main.go:301] handling current node
	
	
	==> kindnet [3cc277a3092b1996e080de34cfec6f38d30c32c1fb580882942cb48454483741] <==
	I1018 10:22:16.237483       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:22:16.237878       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 10:22:16.249360       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:22:16.249450       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:22:16.249490       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:22:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:22:16.458074       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:22:16.458266       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:22:16.458321       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:22:16.463699       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 10:22:46.458415       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 10:22:46.463140       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 10:22:46.463235       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 10:22:46.464738       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 10:22:47.962694       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 10:22:47.962817       1 metrics.go:72] Registering metrics
	I1018 10:22:47.962910       1 controller.go:711] "Syncing nftables rules"
	I1018 10:22:56.461680       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:22:56.461731       1 main.go:301] handling current node
	
	
	==> kube-apiserver [888e1c745ae86edf3cfb0b8124645f0fd6da8c2376869b3e66ea6f0930abf181] <==
	W1018 10:23:01.976550       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.976624       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.976675       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1018 10:23:01.985880       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W1018 10:23:01.986149       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.986319       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.986444       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.986597       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.986703       1 logging.go:55] [core] [Channel #1 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.986846       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.986957       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.987057       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.987189       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.987299       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.987456       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.987679       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.987793       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.987940       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.988056       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.988185       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.988289       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.988443       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.988634       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.988780       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.994567       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e9fc519ce787134d9fb283ac3940bcdbcda1de76ea88d17fe5e11bd56e515333] <==
	I1018 10:23:19.324300       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 10:23:19.343346       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 10:23:19.343517       1 aggregator.go:171] initial CRD sync complete...
	I1018 10:23:19.343564       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 10:23:19.343629       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 10:23:19.343751       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 10:23:19.343785       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 10:23:19.352498       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 10:23:19.358684       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 10:23:19.362290       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 10:23:19.366600       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 10:23:19.366794       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:23:19.373223       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 10:23:19.373373       1 policy_source.go:240] refreshing policies
	I1018 10:23:19.404117       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 10:23:19.449983       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 10:23:19.453674       1 cache.go:39] Caches are synced for autoregister controller
	I1018 10:23:19.453955       1 cache.go:39] Caches are synced for LocalAvailability controller
	E1018 10:23:19.493106       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 10:23:19.955080       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 10:23:21.273645       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 10:23:22.765717       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 10:23:22.792028       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 10:23:22.840280       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 10:23:22.993217       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [d7c9eaf75f0a52bfc264229bcdb9b422cf7b433d030219d78d959d7090553e93] <==
	I1018 10:23:22.738690       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 10:23:22.742381       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 10:23:22.744507       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 10:23:22.744669       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 10:23:22.749383       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 10:23:22.749785       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 10:23:22.751077       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 10:23:22.751170       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 10:23:22.751223       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 10:23:22.756832       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 10:23:22.757747       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 10:23:22.759864       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 10:23:22.775745       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:23:22.783325       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 10:23:22.783595       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 10:23:22.783687       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 10:23:22.784739       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 10:23:22.786881       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 10:23:22.787062       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 10:23:22.788111       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 10:23:22.789491       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 10:23:22.789564       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 10:23:22.799576       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:23:22.803942       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:23:22.808406       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-controller-manager [d9c28587d48616a8fdebac1348a88dc7e223b29162f351fcf753a56d430aa742] <==
	I1018 10:22:12.633726       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 10:22:12.642279       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 10:22:12.661904       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 10:22:12.662160       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 10:22:12.662354       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 10:22:12.662513       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 10:22:12.669981       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 10:22:12.670089       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 10:22:12.670118       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 10:22:12.670311       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 10:22:12.673345       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 10:22:12.678289       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 10:22:12.678388       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 10:22:12.705252       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-019243" podCIDRs=["10.244.0.0/24"]
	I1018 10:22:12.762258       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 10:22:12.789448       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:22:12.798658       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:22:12.799103       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 10:22:12.819579       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 10:22:12.819841       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 10:22:12.969732       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:22:13.013419       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:22:13.013515       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 10:22:13.013545       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 10:22:57.620189       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [006738fd96b2a20ec03049da106472c554433b20145062583aebec83cb373d89] <==
	I1018 10:22:16.477088       1 server_linux.go:53] "Using iptables proxy"
	I1018 10:22:16.735036       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 10:22:16.836064       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 10:22:16.836102       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 10:22:16.836178       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 10:22:16.900274       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:22:16.900409       1 server_linux.go:132] "Using iptables Proxier"
	I1018 10:22:16.912233       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 10:22:16.918642       1 server.go:527] "Version info" version="v1.34.1"
	I1018 10:22:16.918980       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:22:16.920424       1 config.go:200] "Starting service config controller"
	I1018 10:22:16.920486       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 10:22:16.920535       1 config.go:106] "Starting endpoint slice config controller"
	I1018 10:22:16.920563       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 10:22:16.920604       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 10:22:16.920643       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 10:22:16.927005       1 config.go:309] "Starting node config controller"
	I1018 10:22:16.927094       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 10:22:16.927124       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 10:22:17.021602       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 10:22:17.021611       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 10:22:17.021630       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d28ad7321d63b96d8f407c66665665893126b46543ebff1b3dbf9af6d6c2dfa7] <==
	I1018 10:23:16.317455       1 server_linux.go:53] "Using iptables proxy"
	I1018 10:23:17.425796       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 10:23:19.463505       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 10:23:19.463613       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 10:23:19.463725       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 10:23:19.591139       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:23:19.591192       1 server_linux.go:132] "Using iptables Proxier"
	I1018 10:23:19.623595       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 10:23:19.623933       1 server.go:527] "Version info" version="v1.34.1"
	I1018 10:23:19.623988       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:23:19.625420       1 config.go:200] "Starting service config controller"
	I1018 10:23:19.625498       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 10:23:19.625716       1 config.go:106] "Starting endpoint slice config controller"
	I1018 10:23:19.625788       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 10:23:19.625839       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 10:23:19.625867       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 10:23:19.626529       1 config.go:309] "Starting node config controller"
	I1018 10:23:19.626582       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 10:23:19.626610       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 10:23:19.735317       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 10:23:19.735455       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 10:23:19.735558       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [06d43ac35777a88f3e530cc1738680e465f2c9e6e963d7941bfd460e7bbddbd1] <==
	I1018 10:23:16.511920       1 serving.go:386] Generated self-signed cert in-memory
	W1018 10:23:19.214400       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 10:23:19.214526       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 10:23:19.214562       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 10:23:19.214626       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 10:23:19.435481       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 10:23:19.444062       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:23:19.459055       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:23:19.459166       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:23:19.459833       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 10:23:19.459935       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 10:23:19.559737       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [8600f9e89059224c9e5954596534b99d00dc73824984fc82abde77714c802a01] <==
	I1018 10:22:01.675667       1 serving.go:386] Generated self-signed cert in-memory
	W1018 10:22:06.714009       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 10:22:06.714126       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 10:22:06.714161       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 10:22:06.714211       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 10:22:06.775190       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 10:22:06.775301       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:22:06.777800       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 10:22:06.777978       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:22:06.778025       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:22:06.778076       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 10:22:06.812502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 10:22:07.778375       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:23:01.990910       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 10:23:01.996272       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 10:23:01.996361       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 10:23:01.996419       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:23:01.999437       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 10:23:01.999789       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.024816    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-wzfbh\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6de6d24e-a83f-44a4-b857-2dfe3762f0ad" pod="kube-system/coredns-66bc5c9577-wzfbh"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.025339    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="756bdd1668761891cf05525b0230f65f" pod="kube-system/etcd-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.025561    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="10d0834ecec333ea59092e48efdde8b7" pod="kube-system/kube-apiserver-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: I1018 10:23:13.044414    1303 scope.go:117] "RemoveContainer" containerID="b0c8a1278a6d644d49e8aa83478280670ab2c3020dc228a9b4dfe7c86b1f20f5"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.044958    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="10d0834ecec333ea59092e48efdde8b7" pod="kube-system/kube-apiserver-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.045531    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="770e4b281efe92e27e8c8070495183a9" pod="kube-system/kube-scheduler-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.045836    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9p267\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="77f6445a-e9ac-4649-97a9-01e4119993f6" pod="kube-system/kindnet-9p267"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.046129    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ph8v\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="cffc1e4e-0867-497b-9adf-a0e9b98374b5" pod="kube-system/kube-proxy-9ph8v"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.046395    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-wzfbh\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6de6d24e-a83f-44a4-b857-2dfe3762f0ad" pod="kube-system/coredns-66bc5c9577-wzfbh"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.046761    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="756bdd1668761891cf05525b0230f65f" pod="kube-system/etcd-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.051917    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="07d6d73554a3524b20c45fc6c7fce5a6" pod="kube-system/kube-controller-manager-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.052383    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="756bdd1668761891cf05525b0230f65f" pod="kube-system/etcd-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: I1018 10:23:13.052746    1303 scope.go:117] "RemoveContainer" containerID="d9c28587d48616a8fdebac1348a88dc7e223b29162f351fcf753a56d430aa742"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.053776    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="10d0834ecec333ea59092e48efdde8b7" pod="kube-system/kube-apiserver-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.054012    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="770e4b281efe92e27e8c8070495183a9" pod="kube-system/kube-scheduler-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.054176    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9p267\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="77f6445a-e9ac-4649-97a9-01e4119993f6" pod="kube-system/kindnet-9p267"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.054562    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ph8v\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="cffc1e4e-0867-497b-9adf-a0e9b98374b5" pod="kube-system/kube-proxy-9ph8v"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.054742    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-wzfbh\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6de6d24e-a83f-44a4-b857-2dfe3762f0ad" pod="kube-system/coredns-66bc5c9577-wzfbh"
	Oct 18 10:23:18 pause-019243 kubelet[1303]: W1018 10:23:18.987508    1303 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 18 10:23:19 pause-019243 kubelet[1303]: E1018 10:23:19.171091    1303 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-019243\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-019243' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 18 10:23:19 pause-019243 kubelet[1303]: E1018 10:23:19.171524    1303 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-019243\" is forbidden: User \"system:node:pause-019243\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-019243' and this object" podUID="770e4b281efe92e27e8c8070495183a9" pod="kube-system/kube-scheduler-pause-019243"
	Oct 18 10:23:19 pause-019243 kubelet[1303]: E1018 10:23:19.275196    1303 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-9p267\" is forbidden: User \"system:node:pause-019243\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-019243' and this object" podUID="77f6445a-e9ac-4649-97a9-01e4119993f6" pod="kube-system/kindnet-9p267"
	Oct 18 10:23:32 pause-019243 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 10:23:32 pause-019243 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 10:23:32 pause-019243 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
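The dump above ends with systemd deactivating kubelet.service at 10:23:32, consistent with the pause operation under test stopping the node agent. As a minimal sketch of checking this by hand (a hypothetical manual step, assuming the pause-019243 profile and the binary path from this report are still available):

	# Hypothetical manual check: inspect kubelet's unit state inside the kicbase container
	out/minikube-linux-arm64 ssh -p pause-019243 -- sudo systemctl status kubelet --no-pager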
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-019243 -n pause-019243
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-019243 -n pause-019243: exit status 2 (557.441744ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-019243 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
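For reference, the two probes the harness ran above can be replayed by hand; a minimal sketch, assuming the pause-019243 profile still exists and the same binary path (quoting added so the template and jsonpath expressions survive the shell):

	# Same probe as helpers_test.go:262: report the profile's apiserver state as minikube sees it
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p pause-019243 -n pause-019243

	# Same probe as helpers_test.go:269: print names of pods whose phase is not Running, across all namespaces
	kubectl --context pause-019243 get po -o=jsonpath='{.items[*].metadata.name}' -A --field-selector=status.phase!=Running

Empty output from the kubectl query means no pod is stuck outside the Running phase.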
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-019243
helpers_test.go:243: (dbg) docker inspect pause-019243:

-- stdout --
	[
	    {
	        "Id": "3902a816561ae2ae838815e122f06ce84d468c513ca43dccc358a1e3a7125fb4",
	        "Created": "2025-10-18T10:21:38.142725934Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 426132,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:21:38.229548709Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/3902a816561ae2ae838815e122f06ce84d468c513ca43dccc358a1e3a7125fb4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3902a816561ae2ae838815e122f06ce84d468c513ca43dccc358a1e3a7125fb4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3902a816561ae2ae838815e122f06ce84d468c513ca43dccc358a1e3a7125fb4/hosts",
	        "LogPath": "/var/lib/docker/containers/3902a816561ae2ae838815e122f06ce84d468c513ca43dccc358a1e3a7125fb4/3902a816561ae2ae838815e122f06ce84d468c513ca43dccc358a1e3a7125fb4-json.log",
	        "Name": "/pause-019243",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-019243:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-019243",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3902a816561ae2ae838815e122f06ce84d468c513ca43dccc358a1e3a7125fb4",
	                "LowerDir": "/var/lib/docker/overlay2/e91ce86bb9ac9a31e1e05e6b951a98cd31e0b38c4c09f267221d71fa8428eaf4-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e91ce86bb9ac9a31e1e05e6b951a98cd31e0b38c4c09f267221d71fa8428eaf4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e91ce86bb9ac9a31e1e05e6b951a98cd31e0b38c4c09f267221d71fa8428eaf4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e91ce86bb9ac9a31e1e05e6b951a98cd31e0b38c4c09f267221d71fa8428eaf4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-019243",
	                "Source": "/var/lib/docker/volumes/pause-019243/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-019243",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-019243",
	                "name.minikube.sigs.k8s.io": "pause-019243",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f2bb97a78fa915e21f95a4230b92bc400a83022bf6e9873eba72f37d35500625",
	            "SandboxKey": "/var/run/docker/netns/f2bb97a78fa9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33348"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33349"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33352"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33350"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33351"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-019243": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:a0:f0:d1:cc:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4a9c844bd2f37f27a691b49d505aa949bddd3153af738f97af2bb8079b116b6a",
	                    "EndpointID": "692428ed49b6524aed89a8a22c1ab5299754ddea7022ba74e1d94dfee416e1ff",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-019243",
	                        "3902a816561a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
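
The inspect dump above is post-mortem context; the verdicts in this report come from templating single fields out of the same data, e.g. the `docker container inspect pause-019243 --format={{.State.Status}}` call that appears later in this log. A minimal Go sketch of that probe, assuming only the docker CLI on PATH and reusing the profile name from this report:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// inspectState templates a single field out of `docker container inspect`,
	// the same kind of probe the harness issues while deciding pass/fail.
	func inspectState(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", "{{.State.Status}}", container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := inspectState("pause-019243") // profile from this report
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println(state) // e.g. "running", even though the pause verdict failed
	}
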
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-019243 -n pause-019243
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-019243 -n pause-019243: exit status 2 (554.072695ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-019243 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-019243 logs -n 25: (1.862852708s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p test-preload-968310                                                                                                │ test-preload-968310         │ jenkins │ v1.37.0 │ 18 Oct 25 10:20 UTC │ 18 Oct 25 10:20 UTC │
	│ start   │ -p scheduled-stop-023595 --memory=3072 --driver=docker  --container-runtime=crio                                      │ scheduled-stop-023595       │ jenkins │ v1.37.0 │ 18 Oct 25 10:20 UTC │ 18 Oct 25 10:21 UTC │
	│ stop    │ -p scheduled-stop-023595 --schedule 5m                                                                                │ scheduled-stop-023595       │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ stop    │ -p scheduled-stop-023595 --schedule 5m                                                                                │ scheduled-stop-023595       │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ stop    │ -p scheduled-stop-023595 --schedule 5m                                                                                │ scheduled-stop-023595       │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ stop    │ -p scheduled-stop-023595 --schedule 15s                                                                               │ scheduled-stop-023595       │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ stop    │ -p scheduled-stop-023595 --schedule 15s                                                                               │ scheduled-stop-023595       │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ stop    │ -p scheduled-stop-023595 --schedule 15s                                                                               │ scheduled-stop-023595       │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ delete  │ -p scheduled-stop-023595                                                                                              │ scheduled-stop-023595       │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │ 18 Oct 25 10:21 UTC │
	│ start   │ -p insufficient-storage-971499 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio      │ insufficient-storage-971499 │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ delete  │ -p insufficient-storage-971499                                                                                        │ insufficient-storage-971499 │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │ 18 Oct 25 10:21 UTC │
	│ start   │ -p pause-019243 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio             │ pause-019243                │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │ 18 Oct 25 10:22 UTC │
	│ start   │ -p NoKubernetes-403599 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio         │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │                     │
	│ start   │ -p NoKubernetes-403599 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                 │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:21 UTC │ 18 Oct 25 10:22 UTC │
	│ start   │ -p NoKubernetes-403599 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:22 UTC │ 18 Oct 25 10:22 UTC │
	│ delete  │ -p NoKubernetes-403599                                                                                                │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:22 UTC │ 18 Oct 25 10:22 UTC │
	│ start   │ -p NoKubernetes-403599 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:22 UTC │ 18 Oct 25 10:22 UTC │
	│ ssh     │ -p NoKubernetes-403599 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:22 UTC │                     │
	│ stop    │ -p NoKubernetes-403599                                                                                                │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:22 UTC │ 18 Oct 25 10:22 UTC │
	│ start   │ -p NoKubernetes-403599 --driver=docker  --container-runtime=crio                                                      │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:22 UTC │ 18 Oct 25 10:22 UTC │
	│ ssh     │ -p NoKubernetes-403599 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:22 UTC │                     │
	│ delete  │ -p NoKubernetes-403599                                                                                                │ NoKubernetes-403599         │ jenkins │ v1.37.0 │ 18 Oct 25 10:22 UTC │ 18 Oct 25 10:22 UTC │
	│ start   │ -p missing-upgrade-495276 --memory=3072 --driver=docker  --container-runtime=crio                                     │ missing-upgrade-495276      │ jenkins │ v1.32.0 │ 18 Oct 25 10:22 UTC │                     │
	│ start   │ -p pause-019243 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-019243                │ jenkins │ v1.37.0 │ 18 Oct 25 10:23 UTC │ 18 Oct 25 10:23 UTC │
	│ pause   │ -p pause-019243 --alsologtostderr -v=5                                                                                │ pause-019243                │ jenkins │ v1.37.0 │ 18 Oct 25 10:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:23:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 10:23:00.055519  434303 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:23:00.057671  434303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:23:00.057686  434303 out.go:374] Setting ErrFile to fd 2...
	I1018 10:23:00.057731  434303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:23:00.058567  434303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:23:00.060214  434303 out.go:368] Setting JSON to false
	I1018 10:23:00.074037  434303 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7530,"bootTime":1760775450,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:23:00.074465  434303 start.go:141] virtualization:  
	I1018 10:23:00.090251  434303 out.go:179] * [pause-019243] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:23:00.104082  434303 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:23:00.105045  434303 notify.go:220] Checking for updates...
	I1018 10:23:00.134082  434303 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:23:00.168477  434303 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:23:00.173280  434303 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:23:00.186648  434303 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:23:00.190057  434303 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:23:00.205775  434303 config.go:182] Loaded profile config "pause-019243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:23:00.209482  434303 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:23:00.309366  434303 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:23:00.309533  434303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:23:00.413010  434303 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2025-10-18 10:23:00.395744205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:23:00.413135  434303 docker.go:318] overlay module found
	I1018 10:23:00.416962  434303 out.go:179] * Using the docker driver based on existing profile
	I1018 10:23:00.420015  434303 start.go:305] selected driver: docker
	I1018 10:23:00.420043  434303 start.go:925] validating driver "docker" against &{Name:pause-019243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-019243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:23:00.420202  434303 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:23:00.420316  434303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:23:00.540220  434303 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2025-10-18 10:23:00.519746493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:23:00.540720  434303 cni.go:84] Creating CNI manager for ""
	I1018 10:23:00.540780  434303 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:23:00.540823  434303 start.go:349] cluster config:
	{Name:pause-019243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-019243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:23:00.544247  434303 out.go:179] * Starting "pause-019243" primary control-plane node in "pause-019243" cluster
	I1018 10:23:00.547306  434303 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:23:00.550945  434303 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:23:00.553833  434303 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:23:00.553916  434303 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 10:23:00.553934  434303 cache.go:58] Caching tarball of preloaded images
	I1018 10:23:00.554027  434303 preload.go:233] Found /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 10:23:00.554041  434303 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 10:23:00.554195  434303 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/config.json ...
	I1018 10:23:00.554468  434303 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:23:00.586353  434303 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:23:00.586379  434303 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 10:23:00.586394  434303 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:23:00.586418  434303 start.go:360] acquireMachinesLock for pause-019243: {Name:mk05462e9af1aedb94ca598a536cc4d42d3c7af9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:23:00.586478  434303 start.go:364] duration metric: took 38.867µs to acquireMachinesLock for "pause-019243"
	I1018 10:23:00.586502  434303 start.go:96] Skipping create...Using existing machine configuration
	I1018 10:23:00.586511  434303 fix.go:54] fixHost starting: 
	I1018 10:23:00.586762  434303 cli_runner.go:164] Run: docker container inspect pause-019243 --format={{.State.Status}}
	I1018 10:23:00.621299  434303 fix.go:112] recreateIfNeeded on pause-019243: state=Running err=<nil>
	W1018 10:23:00.621346  434303 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 10:23:00.624780  434303 out.go:252] * Updating the running docker "pause-019243" container ...
	I1018 10:23:00.624822  434303 machine.go:93] provisionDockerMachine start ...
	I1018 10:23:00.624916  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:00.644351  434303 main.go:141] libmachine: Using SSH client type: native
	I1018 10:23:00.644681  434303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33348 <nil> <nil>}
	I1018 10:23:00.644696  434303 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:23:00.821702  434303 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-019243
	
	I1018 10:23:00.821794  434303 ubuntu.go:182] provisioning hostname "pause-019243"
	I1018 10:23:00.821883  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:00.843231  434303 main.go:141] libmachine: Using SSH client type: native
	I1018 10:23:00.843552  434303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33348 <nil> <nil>}
	I1018 10:23:00.843562  434303 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-019243 && echo "pause-019243" | sudo tee /etc/hostname
	I1018 10:23:01.019782  434303 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-019243
	
	I1018 10:23:01.019929  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:01.044597  434303 main.go:141] libmachine: Using SSH client type: native
	I1018 10:23:01.045095  434303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33348 <nil> <nil>}
	I1018 10:23:01.045617  434303 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-019243' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-019243/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-019243' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:23:01.214779  434303 main.go:141] libmachine: SSH cmd err, output: <nil>: 
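
The heredoc above is idempotent: it leaves /etc/hosts alone when some line already ends in the hostname, rewrites an existing 127.0.1.1 entry when one is present, and appends otherwise. A Go sketch of the same decision tree, under the assumption of direct file access rather than the SSH session the log actually uses:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// ensureHostsEntry mirrors the shell above: do nothing if some line already
	// ends with the hostname, otherwise rewrite an existing 127.0.1.1 line or
	// append one. Local file access is an assumption of this sketch; the real
	// flow runs the shell over SSH as root.
	func ensureHostsEntry(path, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(host)+`$`).Match(data) {
			return nil // entry already present
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.Match(data) {
			data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+host))
		} else {
			data = append(data, []byte("127.0.1.1 "+host+"\n")...)
		}
		return os.WriteFile(path, data, 0o644)
	}

	func main() {
		fmt.Println(ensureHostsEntry("/etc/hosts", "pause-019243"))
	}
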
	I1018 10:23:01.214847  434303 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:23:01.214892  434303 ubuntu.go:190] setting up certificates
	I1018 10:23:01.214917  434303 provision.go:84] configureAuth start
	I1018 10:23:01.215002  434303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-019243
	I1018 10:23:01.246910  434303 provision.go:143] copyHostCerts
	I1018 10:23:01.246973  434303 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:23:01.246983  434303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:23:01.247054  434303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:23:01.247344  434303 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:23:01.247362  434303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:23:01.247612  434303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:23:01.247947  434303 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:23:01.247960  434303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:23:01.248176  434303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:23:01.248276  434303 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.pause-019243 san=[127.0.0.1 192.168.76.2 localhost minikube pause-019243]
	I1018 10:23:01.553206  434303 provision.go:177] copyRemoteCerts
	I1018 10:23:01.553568  434303 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:23:01.553642  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:01.576564  434303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/pause-019243/id_rsa Username:docker}
	I1018 10:23:01.691151  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:23:01.723228  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 10:23:01.741039  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:23:01.758699  434303 provision.go:87] duration metric: took 543.744817ms to configureAuth
	I1018 10:23:01.758722  434303 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:23:01.758942  434303 config.go:182] Loaded profile config "pause-019243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:23:01.759049  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:01.779735  434303 main.go:141] libmachine: Using SSH client type: native
	I1018 10:23:01.780038  434303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33348 <nil> <nil>}
	I1018 10:23:01.780059  434303 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:23:07.087909  434147 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from cached tarball
	I1018 10:23:07.087936  434147 cache.go:194] Successfully downloaded all kic artifacts
	I1018 10:23:07.087991  434147 start.go:365] acquiring machines lock for missing-upgrade-495276: {Name:mk50b1d211b8a40bcfaec996f67525cfafb066cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:23:07.088100  434147 start.go:369] acquired machines lock for "missing-upgrade-495276" in 90.929µs
	I1018 10:23:07.088123  434147 start.go:93] Provisioning new machine with config: &{Name:missing-upgrade-495276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-495276 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:23:07.088191  434147 start.go:125] createHost starting for "" (driver="docker")
	I1018 10:23:07.091888  434147 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 10:23:07.092148  434147 start.go:159] libmachine.API.Create for "missing-upgrade-495276" (driver="docker")
	I1018 10:23:07.092167  434147 client.go:168] LocalClient.Create starting
	I1018 10:23:07.092227  434147 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem
	I1018 10:23:07.092259  434147 main.go:141] libmachine: Decoding PEM data...
	I1018 10:23:07.092272  434147 main.go:141] libmachine: Parsing certificate...
	I1018 10:23:07.092326  434147 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem
	I1018 10:23:07.092345  434147 main.go:141] libmachine: Decoding PEM data...
	I1018 10:23:07.092355  434147 main.go:141] libmachine: Parsing certificate...
	I1018 10:23:07.092701  434147 cli_runner.go:164] Run: docker network inspect missing-upgrade-495276 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 10:23:07.114920  434147 cli_runner.go:211] docker network inspect missing-upgrade-495276 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 10:23:07.114986  434147 network_create.go:281] running [docker network inspect missing-upgrade-495276] to gather additional debugging logs...
	I1018 10:23:07.115007  434147 cli_runner.go:164] Run: docker network inspect missing-upgrade-495276
	W1018 10:23:07.133293  434147 cli_runner.go:211] docker network inspect missing-upgrade-495276 returned with exit code 1
	I1018 10:23:07.133314  434147 network_create.go:284] error running [docker network inspect missing-upgrade-495276]: docker network inspect missing-upgrade-495276: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-495276 not found
	I1018 10:23:07.133325  434147 network_create.go:286] output of [docker network inspect missing-upgrade-495276]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-495276 not found
	
	** /stderr **
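
The non-zero exit above is expected: minikube probes for the network and treats an inspect failure as "absent", then falls through to creation. A minimal Go sketch of that existence check, assuming only the docker CLI on PATH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// networkExists treats any non-zero exit from `docker network inspect`
	// as "network absent", the same signal the log above acts on before
	// falling through to `docker network create`.
	func networkExists(name string) bool {
		return exec.Command("docker", "network", "inspect", name).Run() == nil
	}

	func main() {
		fmt.Println(networkExists("missing-upgrade-495276")) // false before creation
	}
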
	I1018 10:23:07.133419  434147 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:23:07.152298  434147 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-57e2bd20fa2f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:61:d0:06:18:0c} reservation:<nil>}
	I1018 10:23:07.152793  434147 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb4a8c61b69d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:8c:0f:03:ab:d8} reservation:<nil>}
	I1018 10:23:07.153292  434147 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1d3a8356dfdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:ce:7a:d0:e4:d4} reservation:<nil>}
	I1018 10:23:07.153645  434147 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4a9c844bd2f3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0e:13:da:08:e1:42} reservation:<nil>}
	I1018 10:23:07.154123  434147 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025a5a20}
	I1018 10:23:07.154156  434147 network_create.go:124] attempt to create docker network missing-upgrade-495276 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 10:23:07.154210  434147 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-495276 missing-upgrade-495276
	I1018 10:23:07.218493  434147 network_create.go:108] docker network missing-upgrade-495276 192.168.85.0/24 created
	I1018 10:23:07.218522  434147 kic.go:121] calculated static IP "192.168.85.2" for the "missing-upgrade-495276" container
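
The scan above walks candidate 192.168.x.0/24 blocks in a fixed order (third octet 49, 58, 67, 76, 85, ...) and takes the first subnet with no backing bridge. A simplified Go sketch of that walk; the +9 step and the stopping point are assumptions read off this log, and the real code also inspects host interfaces before calling a subnet free:

	package main

	import "fmt"

	// firstFreeSubnet returns the first candidate 192.168.x.0/24 whose third
	// octet, starting at 49 and stepping by 9, is not already taken.
	func firstFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 247; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[subnet] {
				return subnet
			}
		}
		return "" // no free candidate
	}

	func main() {
		taken := map[string]bool{ // the four subnets the log reports as taken
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
			"192.168.76.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24, matching the log
	}
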
	I1018 10:23:07.218600  434147 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 10:23:07.238929  434147 cli_runner.go:164] Run: docker volume create missing-upgrade-495276 --label name.minikube.sigs.k8s.io=missing-upgrade-495276 --label created_by.minikube.sigs.k8s.io=true
	I1018 10:23:07.276542  434147 oci.go:103] Successfully created a docker volume missing-upgrade-495276
	I1018 10:23:07.276624  434147 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-495276-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-495276 --entrypoint /usr/bin/test -v missing-upgrade-495276:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1018 10:23:08.505237  434147 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-495276-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-495276 --entrypoint /usr/bin/test -v missing-upgrade-495276:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (1.228578236s)
	I1018 10:23:08.505255  434147 oci.go:107] Successfully prepared a docker volume missing-upgrade-495276
	I1018 10:23:08.505277  434147 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1018 10:23:08.505296  434147 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 10:23:08.505369  434147 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-495276:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 10:23:07.164181  434303 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:23:07.164204  434303 machine.go:96] duration metric: took 6.539373436s to provisionDockerMachine
	I1018 10:23:07.164215  434303 start.go:293] postStartSetup for "pause-019243" (driver="docker")
	I1018 10:23:07.164226  434303 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:23:07.164298  434303 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:23:07.164343  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:07.200169  434303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/pause-019243/id_rsa Username:docker}
	I1018 10:23:07.325167  434303 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:23:07.336449  434303 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:23:07.336476  434303 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:23:07.336487  434303 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:23:07.336545  434303 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:23:07.336624  434303 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:23:07.336731  434303 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:23:07.345984  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:23:07.366757  434303 start.go:296] duration metric: took 202.525985ms for postStartSetup
	I1018 10:23:07.366871  434303 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:23:07.366920  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:07.385337  434303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/pause-019243/id_rsa Username:docker}
	I1018 10:23:07.515631  434303 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:23:07.521084  434303 fix.go:56] duration metric: took 6.93456577s for fixHost
	I1018 10:23:07.521110  434303 start.go:83] releasing machines lock for "pause-019243", held for 6.934619727s
	I1018 10:23:07.521235  434303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-019243
	I1018 10:23:07.538218  434303 ssh_runner.go:195] Run: cat /version.json
	I1018 10:23:07.538280  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:07.538532  434303 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:23:07.538586  434303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-019243
	I1018 10:23:07.556540  434303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/pause-019243/id_rsa Username:docker}
	I1018 10:23:07.570730  434303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/pause-019243/id_rsa Username:docker}
	I1018 10:23:07.656825  434303 ssh_runner.go:195] Run: systemctl --version
	I1018 10:23:07.825102  434303 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:23:07.911937  434303 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:23:07.919604  434303 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:23:07.919672  434303 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:23:07.931314  434303 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 10:23:07.931340  434303 start.go:495] detecting cgroup driver to use...
	I1018 10:23:07.931372  434303 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:23:07.931423  434303 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:23:07.948989  434303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:23:07.974722  434303 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:23:07.974829  434303 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:23:07.993041  434303 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:23:08.007973  434303 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:23:08.215415  434303 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:23:08.402601  434303 docker.go:234] disabling docker service ...
	I1018 10:23:08.402722  434303 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:23:08.426461  434303 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:23:08.451760  434303 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:23:08.685806  434303 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:23:08.874499  434303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:23:08.889757  434303 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:23:08.904946  434303 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:23:08.905009  434303 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:08.915277  434303 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:23:08.915340  434303 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:08.924969  434303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:08.934777  434303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:08.944589  434303 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:23:08.953687  434303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:08.963618  434303 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:08.972588  434303 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:08.987262  434303 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:23:08.995938  434303 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:23:09.004528  434303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:23:09.178925  434303 ssh_runner.go:195] Run: sudo systemctl restart crio
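
Read together, the sed pipeline above leaves the following keys in /etc/crio/crio.conf.d/02-crio.conf before the daemon restart. This is reconstructed from the commands, not a dump of the file, which carries more settings than shown:

	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys only, reconstructed)
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
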
	I1018 10:23:09.791005  434303 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:23:09.791084  434303 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:23:09.796393  434303 start.go:563] Will wait 60s for crictl version
	I1018 10:23:09.796534  434303 ssh_runner.go:195] Run: which crictl
	I1018 10:23:09.802862  434303 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:23:09.850312  434303 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:23:09.850478  434303 ssh_runner.go:195] Run: crio --version
	I1018 10:23:09.904797  434303 ssh_runner.go:195] Run: crio --version
	I1018 10:23:09.943545  434303 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:23:09.944822  434303 cli_runner.go:164] Run: docker network inspect pause-019243 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:23:09.968789  434303 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 10:23:09.975971  434303 kubeadm.go:883] updating cluster {Name:pause-019243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-019243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:23:09.976122  434303 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:23:09.976182  434303 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:23:10.019327  434303 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:23:10.019358  434303 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:23:10.019421  434303 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:23:10.052539  434303 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:23:10.052567  434303 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:23:10.052577  434303 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 10:23:10.052710  434303 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-019243 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-019243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
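The drop-in above uses the standard systemd trick for replacing, rather than appending to, a command: the bare ExecStart= clears whatever the base kubelet.service defined, and the second ExecStart= installs minikube's kubelet invocation. On the node, the merged view and the reload that activates it look like this (sketch; systemctl cat is a generic inspection command, not one the log runs):

	sudo systemctl cat kubelet     # base unit plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload   # required after editing drop-ins, as the log does below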
	I1018 10:23:10.052821  434303 ssh_runner.go:195] Run: crio config
	I1018 10:23:10.128180  434303 cni.go:84] Creating CNI manager for ""
	I1018 10:23:10.128205  434303 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:23:10.128228  434303 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:23:10.128253  434303 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-019243 NodeName:pause-019243 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:23:10.128414  434303 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-019243"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
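This rendered config is what lands in /var/tmp/minikube/kubeadm.yaml.new a few lines below. Assuming the bundled kubeadm supports the validate subcommand (added around v1.26), the file can be sanity-checked without touching the cluster (sketch):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new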
	
	I1018 10:23:10.128503  434303 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:23:10.137859  434303 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:23:10.137930  434303 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:23:10.148188  434303 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1018 10:23:10.162300  434303 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:23:10.175867  434303 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1018 10:23:10.190247  434303 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:23:10.194898  434303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:23:10.374979  434303 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:23:10.389621  434303 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243 for IP: 192.168.76.2
	I1018 10:23:10.389653  434303 certs.go:195] generating shared ca certs ...
	I1018 10:23:10.389669  434303 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:10.389824  434303 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:23:10.389892  434303 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:23:10.389909  434303 certs.go:257] generating profile certs ...
	I1018 10:23:10.390011  434303 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/client.key
	I1018 10:23:10.390096  434303 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/apiserver.key.1256d678
	I1018 10:23:10.390139  434303 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/proxy-client.key
	I1018 10:23:10.390274  434303 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:23:10.390315  434303 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:23:10.390328  434303 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:23:10.390353  434303 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:23:10.390392  434303 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:23:10.390419  434303 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:23:10.390474  434303 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:23:10.391156  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:23:10.409287  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:23:10.426764  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:23:10.443885  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:23:10.460999  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 10:23:10.478147  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:23:10.495468  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:23:10.512755  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 10:23:10.530881  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:23:10.548419  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:23:10.567454  434303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:23:10.585210  434303 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:23:10.598110  434303 ssh_runner.go:195] Run: openssl version
	I1018 10:23:10.607827  434303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:23:10.622905  434303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:23:10.627402  434303 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:23:10.627484  434303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:23:10.670153  434303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:23:10.679145  434303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:23:10.688955  434303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:23:10.693733  434303 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:23:10.693882  434303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:23:10.741222  434303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:23:10.751475  434303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:23:10.760664  434303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:23:10.765331  434303 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:23:10.765443  434303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:23:10.815592  434303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
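The pattern in this block is OpenSSL's hashed-directory lookup: x509 -hash prints the subject-name hash, and the trust store is searched via <hash>.0 symlinks under /etc/ssl/certs. Reproducing the minikubeCA step by hand (the hash is the one the log just computed):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0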
	I1018 10:23:10.825481  434303 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:23:10.829873  434303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 10:23:10.885286  434303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 10:23:10.930542  434303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 10:23:10.982157  434303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 10:23:11.028309  434303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 10:23:11.079256  434303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
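Each of these checks uses -checkend 86400, which exits 0 only if the certificate will still be valid 24 hours from now; a non-zero exit is what would push minikube toward regenerating certs. Made explicit (sketch):

	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"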
	I1018 10:23:11.126277  434303 kubeadm.go:400] StartCluster: {Name:pause-019243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-019243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:23:11.126468  434303 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:23:11.126563  434303 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:23:11.176252  434303 cri.go:89] found id: "0ae979551b18ee12476387ea61edbf996097504d4837b5af82bca211b75cbe5c"
	I1018 10:23:11.176330  434303 cri.go:89] found id: "006738fd96b2a20ec03049da106472c554433b20145062583aebec83cb373d89"
	I1018 10:23:11.176348  434303 cri.go:89] found id: "3cc277a3092b1996e080de34cfec6f38d30c32c1fb580882942cb48454483741"
	I1018 10:23:11.176368  434303 cri.go:89] found id: "8600f9e89059224c9e5954596534b99d00dc73824984fc82abde77714c802a01"
	I1018 10:23:11.176399  434303 cri.go:89] found id: "888e1c745ae86edf3cfb0b8124645f0fd6da8c2376869b3e66ea6f0930abf181"
	I1018 10:23:11.176423  434303 cri.go:89] found id: "b0c8a1278a6d644d49e8aa83478280670ab2c3020dc228a9b4dfe7c86b1f20f5"
	I1018 10:23:11.176446  434303 cri.go:89] found id: "d9c28587d48616a8fdebac1348a88dc7e223b29162f351fcf753a56d430aa742"
	I1018 10:23:11.176464  434303 cri.go:89] found id: ""
	I1018 10:23:11.176542  434303 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 10:23:11.202678  434303 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:23:11Z" level=error msg="open /run/runc: no such file or directory"
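runc keeps live-container state under its root directory, /run/runc by default, so the listing fails outright when nothing has ever run through runc on this boot. The warning is tolerated: minikube concludes nothing is paused and continues to the restart path below. The failing probe by hand (same command the log ran, plus a fallback message of our own):

	sudo runc list -f json || echo "no runc state dir; nothing to unpause"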
	I1018 10:23:11.202820  434303 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:23:11.218641  434303 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 10:23:11.218712  434303 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 10:23:11.218791  434303 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 10:23:11.228045  434303 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 10:23:11.228714  434303 kubeconfig.go:125] found "pause-019243" server: "https://192.168.76.2:8443"
	I1018 10:23:11.229483  434303 kapi.go:59] client config for pause-019243: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/client.crt", KeyFile:"/home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/client.key", CAFile:"/home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 10:23:11.230096  434303 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 10:23:11.230314  434303 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 10:23:11.230338  434303 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 10:23:11.230359  434303 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 10:23:11.230390  434303 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 10:23:11.230788  434303 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 10:23:11.246355  434303 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 10:23:11.246429  434303 kubeadm.go:601] duration metric: took 27.698107ms to restartPrimaryControlPlane
	I1018 10:23:11.246453  434303 kubeadm.go:402] duration metric: took 120.185716ms to StartCluster
	I1018 10:23:11.246496  434303 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:11.246603  434303 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:23:11.247303  434303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:11.247591  434303 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:23:11.248004  434303 config.go:182] Loaded profile config "pause-019243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:23:11.248093  434303 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:23:11.251355  434303 out.go:179] * Verifying Kubernetes components...
	I1018 10:23:11.253298  434303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:23:11.253403  434303 out.go:179] * Enabled addons: 
	I1018 10:23:12.878195  434147 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-495276:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.37278546s)
	I1018 10:23:12.878218  434147 kic.go:203] duration metric: took 4.372920 seconds to extract preloaded images to volume
	W1018 10:23:12.878364  434147 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 10:23:12.878458  434147 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 10:23:12.935590  434147 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-495276 --name missing-upgrade-495276 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-495276 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-495276 --network missing-upgrade-495276 --ip 192.168.85.2 --volume missing-upgrade-495276:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1018 10:23:13.396026  434147 cli_runner.go:164] Run: docker container inspect missing-upgrade-495276 --format={{.State.Running}}
	I1018 10:23:13.426676  434147 cli_runner.go:164] Run: docker container inspect missing-upgrade-495276 --format={{.State.Status}}
	I1018 10:23:13.449474  434147 cli_runner.go:164] Run: docker exec missing-upgrade-495276 stat /var/lib/dpkg/alternatives/iptables
	I1018 10:23:13.522041  434147 oci.go:144] the created container "missing-upgrade-495276" has a running status.
	I1018 10:23:13.522061  434147 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/missing-upgrade-495276/id_rsa...
	I1018 10:23:14.100306  434147 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-293333/.minikube/machines/missing-upgrade-495276/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 10:23:14.132242  434147 cli_runner.go:164] Run: docker container inspect missing-upgrade-495276 --format={{.State.Status}}
	I1018 10:23:14.161969  434147 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 10:23:14.161981  434147 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-495276 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 10:23:11.254544  434303 addons.go:514] duration metric: took 6.434015ms for enable addons: enabled=[]
	I1018 10:23:11.429765  434303 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:23:11.445879  434303 node_ready.go:35] waiting up to 6m0s for node "pause-019243" to be "Ready" ...
	W1018 10:23:13.447715  434303 node_ready.go:55] error getting node "pause-019243" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/pause-019243": dial tcp 192.168.76.2:8443: connect: connection refused
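node_ready.go polls the API server directly and retries through connection-refused while the control plane comes back. The equivalent wait with kubectl, against the kubeconfig this run updates (sketch):

	kubectl --kubeconfig /home/jenkins/minikube-integration/21764-293333/kubeconfig \
	  wait --for=condition=Ready node/pause-019243 --timeout=360s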
	I1018 10:23:14.257536  434147 cli_runner.go:164] Run: docker container inspect missing-upgrade-495276 --format={{.State.Status}}
	I1018 10:23:14.285387  434147 machine.go:88] provisioning docker machine ...
	I1018 10:23:14.285408  434147 ubuntu.go:169] provisioning hostname "missing-upgrade-495276"
	I1018 10:23:14.285475  434147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-495276
	I1018 10:23:14.315377  434147 main.go:141] libmachine: Using SSH client type: native
	I1018 10:23:14.315805  434147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33363 <nil> <nil>}
	I1018 10:23:14.315815  434147 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-495276 && echo "missing-upgrade-495276" | sudo tee /etc/hostname
	I1018 10:23:14.316475  434147 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50812->127.0.0.1:33363: read: connection reset by peer
	I1018 10:23:17.508546  434147 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-495276
	
	I1018 10:23:17.508627  434147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-495276
	I1018 10:23:17.537488  434147 main.go:141] libmachine: Using SSH client type: native
	I1018 10:23:17.537912  434147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33363 <nil> <nil>}
	I1018 10:23:17.537928  434147 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-495276' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-495276/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-495276' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:23:17.697725  434147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 10:23:17.697741  434147 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:23:17.697797  434147 ubuntu.go:177] setting up certificates
	I1018 10:23:17.697806  434147 provision.go:83] configureAuth start
	I1018 10:23:17.697881  434147 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-495276
	I1018 10:23:17.725770  434147 provision.go:138] copyHostCerts
	I1018 10:23:17.725832  434147 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:23:17.725839  434147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:23:17.725926  434147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:23:17.726027  434147 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:23:17.726031  434147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:23:17.726056  434147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:23:17.726112  434147 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:23:17.726115  434147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:23:17.726138  434147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:23:17.726189  434147 provision.go:112] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-495276 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-495276]
	I1018 10:23:18.101135  434147 provision.go:172] copyRemoteCerts
	I1018 10:23:18.101220  434147 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:23:18.101263  434147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-495276
	I1018 10:23:18.131868  434147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/missing-upgrade-495276/id_rsa Username:docker}
	I1018 10:23:18.243049  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:23:18.295367  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1018 10:23:18.329121  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 10:23:18.356991  434147 provision.go:86] duration metric: configureAuth took 659.172436ms
	I1018 10:23:18.357009  434147 ubuntu.go:193] setting minikube options for container-runtime
	I1018 10:23:18.357209  434147 config.go:182] Loaded profile config "missing-upgrade-495276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1018 10:23:18.357316  434147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-495276
	I1018 10:23:18.383093  434147 main.go:141] libmachine: Using SSH client type: native
	I1018 10:23:18.383502  434147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bdf40] 0x3c06b0 <nil>  [] 0s} 127.0.0.1 33363 <nil> <nil>}
	I1018 10:23:18.383515  434147 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:23:18.786402  434147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:23:18.786417  434147 machine.go:91] provisioned docker machine in 4.501018512s
	I1018 10:23:18.786425  434147 client.go:171] LocalClient.Create took 11.69425436s
	I1018 10:23:18.786436  434147 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-495276" took 11.694289757s
	I1018 10:23:18.786443  434147 start.go:300] post-start starting for "missing-upgrade-495276" (driver="docker")
	I1018 10:23:18.786451  434147 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:23:18.786510  434147 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:23:18.786551  434147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-495276
	I1018 10:23:18.821625  434147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/missing-upgrade-495276/id_rsa Username:docker}
	I1018 10:23:18.923512  434147 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:23:18.927254  434147 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:23:18.927286  434147 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1018 10:23:18.927295  434147 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1018 10:23:18.927302  434147 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1018 10:23:18.927314  434147 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:23:18.927372  434147 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:23:18.927445  434147 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:23:18.927542  434147 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:23:18.936803  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:23:18.978511  434147 start.go:303] post-start completed in 192.054785ms
	I1018 10:23:18.978871  434147 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-495276
	I1018 10:23:19.003371  434147 profile.go:148] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/config.json ...
	I1018 10:23:19.003653  434147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:23:19.003693  434147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-495276
	I1018 10:23:19.036373  434147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/missing-upgrade-495276/id_rsa Username:docker}
	I1018 10:23:19.139099  434147 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:23:19.156696  434147 start.go:128] duration metric: createHost completed in 12.068489431s
	I1018 10:23:19.156712  434147 start.go:83] releasing machines lock for "missing-upgrade-495276", held for 12.068605458s
	I1018 10:23:19.156823  434147 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-495276
	I1018 10:23:19.187507  434147 ssh_runner.go:195] Run: cat /version.json
	I1018 10:23:19.187562  434147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-495276
	I1018 10:23:19.187871  434147 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:23:19.187923  434147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-495276
	I1018 10:23:19.242407  434147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/missing-upgrade-495276/id_rsa Username:docker}
	I1018 10:23:19.383980  434303 node_ready.go:49] node "pause-019243" is "Ready"
	I1018 10:23:19.384008  434303 node_ready.go:38] duration metric: took 7.938100191s for node "pause-019243" to be "Ready" ...
	I1018 10:23:19.384022  434303 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:23:19.384084  434303 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:23:19.414807  434303 api_server.go:72] duration metric: took 8.167155242s to wait for apiserver process to appear ...
	I1018 10:23:19.414829  434303 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:23:19.414848  434303 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 10:23:19.493026  434303 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 10:23:19.493058  434303 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 10:23:19.915645  434303 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 10:23:19.925818  434303 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 10:23:19.925954  434303 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
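These 500s are the expected shape of a restart: /healthz?verbose reports every post-start hook, the [-] entries (start-service-ip-repair-controllers, rbac/bootstrap-roles, bootstrap-controller, ...) flip to [+] one by one, and the endpoint turns 200 once all pass. The same verbose probe by hand (sketch):

	kubectl --kubeconfig /home/jenkins/minikube-integration/21764-293333/kubeconfig \
	  get --raw '/healthz?verbose'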
	I1018 10:23:19.251217  434147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/missing-upgrade-495276/id_rsa Username:docker}
	I1018 10:23:19.345529  434147 ssh_runner.go:195] Run: systemctl --version
	I1018 10:23:19.541895  434147 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:23:19.703985  434147 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1018 10:23:19.710943  434147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:23:19.751231  434147 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1018 10:23:19.751298  434147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:23:19.806647  434147 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
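Note that minikube disables the distro CNI configs by renaming rather than deleting: each file gains a .mk_disabled suffix, so CRI-O stops loading it but it stays recoverable. Listing what was parked (sketch):

	ls /etc/cni/net.d/*.mk_disabled
	# e.g. 87-podman-bridge.conflist.mk_disabled  100-crio-bridge.conf.mk_disabled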
	I1018 10:23:19.806660  434147 start.go:472] detecting cgroup driver to use...
	I1018 10:23:19.806704  434147 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1018 10:23:19.806755  434147 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:23:19.829946  434147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:23:19.853364  434147 docker.go:203] disabling cri-docker service (if available) ...
	I1018 10:23:19.853415  434147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:23:19.869674  434147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:23:19.897784  434147 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:23:20.016707  434147 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:23:20.170163  434147 docker.go:219] disabling docker service ...
	I1018 10:23:20.170219  434147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:23:20.198172  434147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:23:20.213245  434147 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:23:20.320193  434147 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:23:20.415101  434147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:23:20.435825  434147 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:23:20.458821  434147 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 10:23:20.458869  434147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
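Two small pieces of runtime config come out of this stretch: the one-line /etc/crictl.yaml written just above, and the pause_image line rewritten in 02-crio.conf. Checking both (sketch):

	cat /etc/crictl.yaml
	# runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo grep '^pause_image' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"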
	I1018 10:23:20.474433  434147 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:23:20.474493  434147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:20.486287  434147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:20.496359  434147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:23:20.506559  434147 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:23:20.515748  434147 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:23:20.524508  434147 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:23:20.533426  434147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:23:20.622982  434147 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 10:23:20.734705  434147 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:23:20.734769  434147 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:23:20.738187  434147 start.go:540] Will wait 60s for crictl version
	I1018 10:23:20.738237  434147 ssh_runner.go:195] Run: which crictl
	I1018 10:23:20.741657  434147 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1018 10:23:20.783804  434147 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1018 10:23:20.783891  434147 ssh_runner.go:195] Run: crio --version
	I1018 10:23:20.830708  434147 ssh_runner.go:195] Run: crio --version
	I1018 10:23:20.880642  434147 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1018 10:23:20.883489  434147 cli_runner.go:164] Run: docker network inspect missing-upgrade-495276 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:23:20.899626  434147 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 10:23:20.903340  434147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:23:20.914383  434147 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1018 10:23:20.914449  434147 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:23:20.982397  434147 crio.go:496] all images are preloaded for cri-o runtime.
	I1018 10:23:20.982410  434147 crio.go:415] Images already preloaded, skipping extraction
	I1018 10:23:20.982465  434147 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:23:21.019956  434147 crio.go:496] all images are preloaded for cri-o runtime.
	I1018 10:23:21.019969  434147 cache_images.go:84] Images are preloaded, skipping loading
	I1018 10:23:21.020057  434147 ssh_runner.go:195] Run: crio config
	I1018 10:23:21.071827  434147 cni.go:84] Creating CNI manager for ""
	I1018 10:23:21.071839  434147 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:23:21.071859  434147 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1018 10:23:21.071878  434147 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-495276 NodeName:missing-upgrade-495276 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:23:21.072065  434147 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "missing-upgrade-495276"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
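
	A config like the one rendered above can be sanity-checked before the kubeadm init further down, without touching the node. A sketch, assuming the kubeadm v1.28 binary is on PATH and the file was written to /var/tmp/minikube/kubeadm.yaml as in the surrounding log (the validate subcommand is assumed available in this release):

	    # Static validation of the rendered config (no cluster changes).
	    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	    # Full dry run: renders manifests and certs into a temp dir instead of /etc/kubernetes.
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
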
	
	I1018 10:23:21.072128  434147 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=missing-upgrade-495276 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-495276 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
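
	The empty ExecStart= in the drop-in above is deliberate: systemd allows only one ExecStart for a non-oneshot service, so the override first clears the command inherited from kubelet.service and then sets the full kubelet invocation. The merged unit can be inspected on the node:

	    # Print kubelet.service plus every drop-in, in the order systemd applies them.
	    systemctl cat kubelet
	    # Show the ExecStart the running unit actually uses.
	    systemctl show kubelet -p ExecStart
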
	I1018 10:23:21.072190  434147 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1018 10:23:21.081254  434147 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:23:21.081338  434147 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:23:21.090406  434147 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1018 10:23:21.109087  434147 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:23:21.127041  434147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1018 10:23:21.144673  434147 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:23:21.148293  434147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:23:21.158782  434147 certs.go:56] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276 for IP: 192.168.85.2
	I1018 10:23:21.158804  434147 certs.go:190] acquiring lock for shared ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:21.158943  434147 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:23:21.158995  434147 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:23:21.159049  434147 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/client.key
	I1018 10:23:21.159057  434147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/client.crt with IP's: []
	I1018 10:23:22.061719  434147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/client.crt ...
	I1018 10:23:22.061736  434147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/client.crt: {Name:mk7abe6f2762aafb2a9e0f65218c8aed848c4c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:22.061944  434147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/client.key ...
	I1018 10:23:22.061952  434147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/client.key: {Name:mkbdf54a8eb2c7282f7ef45516193ef28a1f72e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:22.062043  434147 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.key.43b9df8c
	I1018 10:23:22.062055  434147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1018 10:23:22.478507  434147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.crt.43b9df8c ...
	I1018 10:23:22.478522  434147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.crt.43b9df8c: {Name:mkc47a4d6191450eca19682adbdce8135345fec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:22.478703  434147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.key.43b9df8c ...
	I1018 10:23:22.478711  434147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.key.43b9df8c: {Name:mk29ee8b6929dea219bc8b18df203fee686da6e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:22.478797  434147 certs.go:337] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.crt
	I1018 10:23:22.478872  434147 certs.go:341] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.key
	I1018 10:23:22.478920  434147 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/proxy-client.key
	I1018 10:23:22.478930  434147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/proxy-client.crt with IP's: []
	I1018 10:23:23.433916  434147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/proxy-client.crt ...
	I1018 10:23:23.433935  434147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/proxy-client.crt: {Name:mk619359d4957aace0b25da816bf2333104ceaab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:23:23.434139  434147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/proxy-client.key ...
	I1018 10:23:23.434147  434147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/proxy-client.key: {Name:mk744fcc9cd750e42a6c30f1682e066cd2d67c5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
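
	The apiserver certificate generated above is an ordinary CA-signed serving cert whose SANs cover the node IP and the service/localhost VIPs (192.168.85.2, 10.96.0.1, 127.0.0.1, 10.0.0.1). A rough openssl equivalent under the same CA, with illustrative filenames rather than minikube's actual code path:

	    # Key plus CSR for the serving identity.
	    openssl req -new -newkey rsa:2048 -nodes \
	      -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
	    # Sign with the shared CA and attach the IP SANs from the log above.
	    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	      -days 365 -out apiserver.crt \
	      -extfile <(printf 'subjectAltName=IP:192.168.85.2,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1')
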
	I1018 10:23:23.434379  434147 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:23:23.434421  434147 certs.go:433] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:23:23.434429  434147 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:23:23.434457  434147 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:23:23.434480  434147 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:23:23.434505  434147 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:23:23.434554  434147 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:23:23.435194  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1018 10:23:23.463846  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 10:23:23.489099  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:23:23.515740  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/missing-upgrade-495276/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 10:23:23.555259  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:23:23.582181  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:23:23.607347  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:23:23.631757  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:23:23.657379  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:23:23.683723  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:23:23.708229  434147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:23:23.733550  434147 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:23:23.752500  434147 ssh_runner.go:195] Run: openssl version
	I1018 10:23:23.758479  434147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:23:23.768021  434147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:23:23.771886  434147 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:23:23.771942  434147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:23:23.779251  434147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:23:23.789341  434147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:23:23.799877  434147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:23:23.804199  434147 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:23:23.804286  434147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:23:23.811920  434147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:23:23.821770  434147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:23:23.831269  434147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:23:23.835167  434147 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:23:23.835225  434147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:23:23.842337  434147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
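
	The link names in the three blocks above are not arbitrary: OpenSSL looks up CAs in /etc/ssl/certs by subject-name hash, so each PEM gets a <hash>.0 symlink whose name is exactly the value computed by the logged `openssl x509 -hash` call (b5213941 for the minikube CA). To verify one by hand:

	    # Recompute the hash that names the symlink.
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # Confirm the symlink resolves back to that PEM.
	    readlink -f /etc/ssl/certs/b5213941.0
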
	I1018 10:23:23.852510  434147 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1018 10:23:23.856167  434147 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1018 10:23:23.856216  434147 kubeadm.go:404] StartCluster: {Name:missing-upgrade-495276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-495276 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1018 10:23:23.856283  434147 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:23:23.856338  434147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:23:23.894426  434147 cri.go:89] found id: ""
	I1018 10:23:23.894511  434147 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:23:23.903810  434147 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 10:23:23.912978  434147 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1018 10:23:23.913036  434147 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 10:23:23.922303  434147 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 10:23:23.922337  434147 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
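
	The long --ignore-preflight-errors list above mutes checks that predictably fail inside a docker-driver container (Swap, NumCPU, Mem, SystemVerification, plus the listed port/file/dir collisions). The same checks can be replayed in isolation; a sketch, assuming the kubeadm binary staged by minikube at the logged path:

	    sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
	      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
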
	I1018 10:23:20.415835  434303 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 10:23:20.424606  434303 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 10:23:20.425785  434303 api_server.go:141] control plane version: v1.34.1
	I1018 10:23:20.425806  434303 api_server.go:131] duration metric: took 1.010970275s to wait for apiserver health ...
	I1018 10:23:20.425815  434303 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:23:20.429119  434303 system_pods.go:59] 7 kube-system pods found
	I1018 10:23:20.429152  434303 system_pods.go:61] "coredns-66bc5c9577-wzfbh" [6de6d24e-a83f-44a4-b857-2dfe3762f0ad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:23:20.429164  434303 system_pods.go:61] "etcd-pause-019243" [641526b8-c065-4e61-9b44-9e121e29d662] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:23:20.429171  434303 system_pods.go:61] "kindnet-9p267" [77f6445a-e9ac-4649-97a9-01e4119993f6] Running
	I1018 10:23:20.429178  434303 system_pods.go:61] "kube-apiserver-pause-019243" [de398340-a205-4697-91b8-bcee5807c22a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:23:20.429286  434303 system_pods.go:61] "kube-controller-manager-pause-019243" [8c88d47c-5404-4458-b28c-b6785c57b652] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:23:20.429293  434303 system_pods.go:61] "kube-proxy-9ph8v" [cffc1e4e-0867-497b-9adf-a0e9b98374b5] Running
	I1018 10:23:20.429299  434303 system_pods.go:61] "kube-scheduler-pause-019243" [f053abc5-7513-4b80-aac1-61ac9edc79a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:23:20.429305  434303 system_pods.go:74] duration metric: took 3.473119ms to wait for pod list to return data ...
	I1018 10:23:20.429320  434303 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:23:20.432519  434303 default_sa.go:45] found service account: "default"
	I1018 10:23:20.432538  434303 default_sa.go:55] duration metric: took 3.212796ms for default service account to be created ...
	I1018 10:23:20.432547  434303 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 10:23:20.438014  434303 system_pods.go:86] 7 kube-system pods found
	I1018 10:23:20.438048  434303 system_pods.go:89] "coredns-66bc5c9577-wzfbh" [6de6d24e-a83f-44a4-b857-2dfe3762f0ad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:23:20.438057  434303 system_pods.go:89] "etcd-pause-019243" [641526b8-c065-4e61-9b44-9e121e29d662] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:23:20.438064  434303 system_pods.go:89] "kindnet-9p267" [77f6445a-e9ac-4649-97a9-01e4119993f6] Running
	I1018 10:23:20.438070  434303 system_pods.go:89] "kube-apiserver-pause-019243" [de398340-a205-4697-91b8-bcee5807c22a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:23:20.438077  434303 system_pods.go:89] "kube-controller-manager-pause-019243" [8c88d47c-5404-4458-b28c-b6785c57b652] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:23:20.438081  434303 system_pods.go:89] "kube-proxy-9ph8v" [cffc1e4e-0867-497b-9adf-a0e9b98374b5] Running
	I1018 10:23:20.438087  434303 system_pods.go:89] "kube-scheduler-pause-019243" [f053abc5-7513-4b80-aac1-61ac9edc79a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:23:20.438093  434303 system_pods.go:126] duration metric: took 5.540805ms to wait for k8s-apps to be running ...
	I1018 10:23:20.438102  434303 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 10:23:20.438154  434303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:23:20.454351  434303 system_svc.go:56] duration metric: took 16.232823ms WaitForService to wait for kubelet
	I1018 10:23:20.454385  434303 kubeadm.go:586] duration metric: took 9.206735097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:23:20.454405  434303 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:23:20.457865  434303 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:23:20.457910  434303 node_conditions.go:123] node cpu capacity is 2
	I1018 10:23:20.457926  434303 node_conditions.go:105] duration metric: took 3.515842ms to run NodePressure ...
	I1018 10:23:20.457938  434303 start.go:241] waiting for startup goroutines ...
	I1018 10:23:20.457945  434303 start.go:246] waiting for cluster config update ...
	I1018 10:23:20.457957  434303 start.go:255] writing updated cluster config ...
	I1018 10:23:20.458285  434303 ssh_runner.go:195] Run: rm -f paused
	I1018 10:23:20.463687  434303 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:23:20.464572  434303 kapi.go:59] client config for pause-019243: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/client.crt", KeyFile:"/home/jenkins/minikube-integration/21764-293333/.minikube/profiles/pause-019243/client.key", CAFile:"/home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 10:23:20.472859  434303 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wzfbh" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 10:23:22.485623  434303 pod_ready.go:104] pod "coredns-66bc5c9577-wzfbh" is not "Ready", error: <nil>
	W1018 10:23:24.979125  434303 pod_ready.go:104] pod "coredns-66bc5c9577-wzfbh" is not "Ready", error: <nil>
	I1018 10:23:24.276754  434147 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1018 10:23:24.277097  434147 kubeadm.go:322] [preflight] Running pre-flight checks
	I1018 10:23:24.331543  434147 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1018 10:23:24.331602  434147 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 10:23:24.331634  434147 kubeadm.go:322] OS: Linux
	I1018 10:23:24.331676  434147 kubeadm.go:322] CGROUPS_CPU: enabled
	I1018 10:23:24.331720  434147 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1018 10:23:24.331763  434147 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1018 10:23:24.331807  434147 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1018 10:23:24.331851  434147 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1018 10:23:24.331895  434147 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1018 10:23:24.331936  434147 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1018 10:23:24.331980  434147 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1018 10:23:24.332022  434147 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1018 10:23:24.923037  434147 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 10:23:24.923242  434147 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 10:23:24.923347  434147 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1018 10:23:25.194726  434147 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 10:23:25.197814  434147 out.go:204]   - Generating certificates and keys ...
	I1018 10:23:25.198003  434147 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1018 10:23:25.201507  434147 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1018 10:23:25.951742  434147 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 10:23:26.153370  434147 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1018 10:23:26.509977  434147 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1018 10:23:27.029782  434147 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1018 10:23:27.364560  434147 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1018 10:23:27.365007  434147 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost missing-upgrade-495276] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 10:23:28.021913  434147 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1018 10:23:28.022281  434147 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost missing-upgrade-495276] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 10:23:28.447855  434147 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 10:23:29.025357  434147 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 10:23:25.479101  434303 pod_ready.go:94] pod "coredns-66bc5c9577-wzfbh" is "Ready"
	I1018 10:23:25.479124  434303 pod_ready.go:86] duration metric: took 5.006230461s for pod "coredns-66bc5c9577-wzfbh" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:25.482098  434303 pod_ready.go:83] waiting for pod "etcd-pause-019243" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 10:23:27.493113  434303 pod_ready.go:104] pod "etcd-pause-019243" is not "Ready", error: <nil>
	W1018 10:23:29.987155  434303 pod_ready.go:104] pod "etcd-pause-019243" is not "Ready", error: <nil>
	I1018 10:23:30.988347  434303 pod_ready.go:94] pod "etcd-pause-019243" is "Ready"
	I1018 10:23:30.988372  434303 pod_ready.go:86] duration metric: took 5.506252831s for pod "etcd-pause-019243" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:30.994821  434303 pod_ready.go:83] waiting for pod "kube-apiserver-pause-019243" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:31.000428  434303 pod_ready.go:94] pod "kube-apiserver-pause-019243" is "Ready"
	I1018 10:23:31.000449  434303 pod_ready.go:86] duration metric: took 5.604438ms for pod "kube-apiserver-pause-019243" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:31.003226  434303 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-019243" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:31.008713  434303 pod_ready.go:94] pod "kube-controller-manager-pause-019243" is "Ready"
	I1018 10:23:31.008789  434303 pod_ready.go:86] duration metric: took 5.542392ms for pod "kube-controller-manager-pause-019243" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:31.011441  434303 pod_ready.go:83] waiting for pod "kube-proxy-9ph8v" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:31.185524  434303 pod_ready.go:94] pod "kube-proxy-9ph8v" is "Ready"
	I1018 10:23:31.185593  434303 pod_ready.go:86] duration metric: took 174.124884ms for pod "kube-proxy-9ph8v" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:31.385495  434303 pod_ready.go:83] waiting for pod "kube-scheduler-pause-019243" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:31.791265  434303 pod_ready.go:94] pod "kube-scheduler-pause-019243" is "Ready"
	I1018 10:23:31.791297  434303 pod_ready.go:86] duration metric: took 405.734612ms for pod "kube-scheduler-pause-019243" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:23:31.791310  434303 pod_ready.go:40] duration metric: took 11.327581601s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:23:31.868526  434303 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:23:31.871806  434303 out.go:179] * Done! kubectl is now configured to use "pause-019243" cluster and "default" namespace by default
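
	The closing line for pause-019243 notes a one-minor version skew (kubectl 1.33.2 against a v1.34.1 control plane), which is inside kubectl's supported +/-1 window and hence informational only. The same skew check on any cluster:

	    # Client and server versions side by side; compare the minor fields.
	    kubectl version --output=json
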
	I1018 10:23:29.886097  434147 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1018 10:23:29.886387  434147 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 10:23:30.376511  434147 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 10:23:30.693630  434147 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 10:23:31.302421  434147 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 10:23:31.875177  434147 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 10:23:31.876166  434147 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 10:23:31.878826  434147 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 10:23:31.882105  434147 out.go:204]   - Booting up control plane ...
	I1018 10:23:31.882271  434147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 10:23:31.882512  434147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 10:23:31.886032  434147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 10:23:31.898367  434147 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 10:23:31.899012  434147 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 10:23:31.899261  434147 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1018 10:23:32.057808  434147 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	
	
	==> CRI-O <==
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.221994551Z" level=info msg="Starting container: e9fc519ce787134d9fb283ac3940bcdbcda1de76ea88d17fe5e11bd56e515333" id=7a584b17-0fa6-4c31-b8a5-c21e4a4c6f1b name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.222029251Z" level=info msg="Starting container: 06d43ac35777a88f3e530cc1738680e465f2c9e6e963d7941bfd460e7bbddbd1" id=726c0a26-0400-40ce-bd89-1c961fab11c9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.231929122Z" level=info msg="Started container" PID=2399 containerID=06d43ac35777a88f3e530cc1738680e465f2c9e6e963d7941bfd460e7bbddbd1 description=kube-system/kube-scheduler-pause-019243/kube-scheduler id=726c0a26-0400-40ce-bd89-1c961fab11c9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c7cf07ac8cc00bd65773914c0557f9b13031b47576090d13d6e848898988561
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.23773472Z" level=info msg="Started container" PID=2398 containerID=e9fc519ce787134d9fb283ac3940bcdbcda1de76ea88d17fe5e11bd56e515333 description=kube-system/kube-apiserver-pause-019243/kube-apiserver id=7a584b17-0fa6-4c31-b8a5-c21e4a4c6f1b name=/runtime.v1.RuntimeService/StartContainer sandboxID=1bd630ccfaf8403b818efdb20229bfd413e6d1ef3d74d59aa896ce93febbc4df
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.241394395Z" level=info msg="Created container d7c9eaf75f0a52bfc264229bcdb9b422cf7b433d030219d78d959d7090553e93: kube-system/kube-controller-manager-pause-019243/kube-controller-manager" id=3909bf18-7adc-458c-ab09-49641386e8e5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.245620648Z" level=info msg="Starting container: d7c9eaf75f0a52bfc264229bcdb9b422cf7b433d030219d78d959d7090553e93" id=674f2fd1-8b9a-43ba-9352-ddcbf96e2e67 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.273918893Z" level=info msg="Started container" PID=2410 containerID=d7c9eaf75f0a52bfc264229bcdb9b422cf7b433d030219d78d959d7090553e93 description=kube-system/kube-controller-manager-pause-019243/kube-controller-manager id=674f2fd1-8b9a-43ba-9352-ddcbf96e2e67 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0933f35c7907f4e18c22625be2aa68656352f9201338e9484e5e6349d9f95151
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.284033975Z" level=info msg="Created container e4701015deca0aaf19fd21f101a1b2f4db45f40d88473ec3a8eb76be901e6b18: kube-system/etcd-pause-019243/etcd" id=79d2b93b-4f1d-4195-9f39-7eaf67d6104b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.286360326Z" level=info msg="Starting container: e4701015deca0aaf19fd21f101a1b2f4db45f40d88473ec3a8eb76be901e6b18" id=0ed49bc9-ba36-4df1-ba4f-c070a5b83d83 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:23:13 pause-019243 crio[2093]: time="2025-10-18T10:23:13.304822847Z" level=info msg="Started container" PID=2403 containerID=e4701015deca0aaf19fd21f101a1b2f4db45f40d88473ec3a8eb76be901e6b18 description=kube-system/etcd-pause-019243/etcd id=0ed49bc9-ba36-4df1-ba4f-c070a5b83d83 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5828504b69145c55a494e06aaf34bba138c29b377522216f14394718a5db16ab
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.525959114Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.529905214Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.52994793Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.529968336Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.534496373Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.534541706Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.534562736Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.53922326Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.539259986Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.539279744Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.546252475Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.546297636Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.546319872Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.554221Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:23:23 pause-019243 crio[2093]: time="2025-10-18T10:23:23.554254928Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	d7c9eaf75f0a5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   26 seconds ago       Running             kube-controller-manager   1                   0933f35c7907f       kube-controller-manager-pause-019243   kube-system
	e4701015deca0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   26 seconds ago       Running             etcd                      1                   5828504b69145       etcd-pause-019243                      kube-system
	06d43ac35777a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   26 seconds ago       Running             kube-scheduler            1                   5c7cf07ac8cc0       kube-scheduler-pause-019243            kube-system
	e9fc519ce7871       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   26 seconds ago       Running             kube-apiserver            1                   1bd630ccfaf84       kube-apiserver-pause-019243            kube-system
	15128da41ead8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   26 seconds ago       Running             coredns                   1                   badf1df91e20d       coredns-66bc5c9577-wzfbh               kube-system
	09054465f840b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   26 seconds ago       Running             kindnet-cni               1                   edc7da5e59966       kindnet-9p267                          kube-system
	d28ad7321d63b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   26 seconds ago       Running             kube-proxy                1                   e144ff6b3a6f1       kube-proxy-9ph8v                       kube-system
	0ae979551b18e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   42 seconds ago       Exited              coredns                   0                   badf1df91e20d       coredns-66bc5c9577-wzfbh               kube-system
	006738fd96b2a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   e144ff6b3a6f1       kube-proxy-9ph8v                       kube-system
	3cc277a3092b1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   edc7da5e59966       kindnet-9p267                          kube-system
	8600f9e890592       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   5c7cf07ac8cc0       kube-scheduler-pause-019243            kube-system
	888e1c745ae86       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   1bd630ccfaf84       kube-apiserver-pause-019243            kube-system
	b0c8a1278a6d6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   5828504b69145       etcd-pause-019243                      kube-system
	d9c28587d4861       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   0933f35c7907f       kube-controller-manager-pause-019243   kube-system
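
	The table above is the node's CRI container listing: each kube-system pod shows one Running attempt-1 container plus the Exited attempt-0 instance left behind by the pause test's runtime restart. It can be regenerated on the node with:

	    # -a includes exited containers, matching the ATTEMPT 0/1 pairs above.
	    sudo crictl ps -a
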
	
	
	==> coredns [0ae979551b18ee12476387ea61edbf996097504d4837b5af82bca211b75cbe5c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46246 - 20172 "HINFO IN 2630737520422832076.2396624382892307287. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020162302s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [15128da41ead8115fa2f84a7672dd4abe119002449d59f70960273bd6e459027] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47169 - 10573 "HINFO IN 7964605365171995352.3829374612193345895. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.054716163s
	
	
	==> describe nodes <==
	Name:               pause-019243
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-019243
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=pause-019243
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_22_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:22:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-019243
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:23:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:23:30 +0000   Sat, 18 Oct 2025 10:21:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:23:30 +0000   Sat, 18 Oct 2025 10:21:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:23:30 +0000   Sat, 18 Oct 2025 10:21:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 10:23:30 +0000   Sat, 18 Oct 2025 10:22:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-019243
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                6b383227-2406-4550-a6ab-0b7bf44092aa
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-wzfbh                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     87s
	  kube-system                 etcd-pause-019243                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         95s
	  kube-system                 kindnet-9p267                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      87s
	  kube-system                 kube-apiserver-pause-019243             250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-pause-019243    200m (10%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-9ph8v                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-pause-019243             100m (5%)     0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 83s                  kube-proxy       
	  Normal   Starting                 20s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  104s (x8 over 105s)  kubelet          Node pause-019243 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    104s (x8 over 105s)  kubelet          Node pause-019243 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     104s (x8 over 105s)  kubelet          Node pause-019243 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 92s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  92s                  kubelet          Node pause-019243 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    92s                  kubelet          Node pause-019243 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     92s                  kubelet          Node pause-019243 status is now: NodeHasSufficientPID
	  Normal   Starting                 92s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           88s                  node-controller  Node pause-019243 event: Registered Node pause-019243 in Controller
	  Normal   NodeReady                43s                  kubelet          Node pause-019243 status is now: NodeReady
	  Warning  ContainerGCFailed        32s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           18s                  node-controller  Node pause-019243 event: Registered Node pause-019243 in Controller
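
	Reading the tail of this event list: the ContainerGCFailed warning 32s back appears to fall in the window when CRI-O was being restarted (note the fresh crio PID 2093 in the CRI-O log above), so the kubelet's garbage collector briefly found no /var/run/crio/crio.sock; the second RegisteredNode event marks the control plane coming back afterwards. If such a warning persists, the socket and service are the first things to check:

	    sudo systemctl status crio --no-pager
	    stat /var/run/crio/crio.sock
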
	
	
	==> dmesg <==
	[Oct18 09:58] overlayfs: idmapped layers are currently not supported
	[  +3.833371] overlayfs: idmapped layers are currently not supported
	[Oct18 10:00] overlayfs: idmapped layers are currently not supported
	[Oct18 10:01] overlayfs: idmapped layers are currently not supported
	[Oct18 10:02] overlayfs: idmapped layers are currently not supported
	[  +3.752225] overlayfs: idmapped layers are currently not supported
	[Oct18 10:03] overlayfs: idmapped layers are currently not supported
	[ +25.695966] overlayfs: idmapped layers are currently not supported
	[Oct18 10:05] overlayfs: idmapped layers are currently not supported
	[Oct18 10:10] overlayfs: idmapped layers are currently not supported
	[ +35.463301] overlayfs: idmapped layers are currently not supported
	[Oct18 10:11] overlayfs: idmapped layers are currently not supported
	[Oct18 10:13] overlayfs: idmapped layers are currently not supported
	[Oct18 10:14] overlayfs: idmapped layers are currently not supported
	[Oct18 10:15] overlayfs: idmapped layers are currently not supported
	[Oct18 10:16] overlayfs: idmapped layers are currently not supported
	[  +1.944912] overlayfs: idmapped layers are currently not supported
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	[ +55.677340] overlayfs: idmapped layers are currently not supported
	[  +3.870584] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b0c8a1278a6d644d49e8aa83478280670ab2c3020dc228a9b4dfe7c86b1f20f5] <==
	{"level":"warn","ts":"2025-10-18T10:22:02.209752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:22:02.236453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:22:02.287104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:22:02.302065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:22:02.349712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:22:02.393931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:22:02.608372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45872","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T10:23:01.968358Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T10:23:01.968409Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-019243","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-18T10:23:01.968505Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T10:23:01.968560Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-18T10:23:02.533159Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T10:23:02.533252Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T10:23:02.533261Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T10:23:02.533325Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T10:23:02.533335Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T10:23:02.533340Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-18T10:23:02.533356Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T10:23:02.533375Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-18T10:23:02.533502Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-18T10:23:02.533532Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T10:23:02.536744Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-18T10:23:02.536823Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T10:23:02.536859Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T10:23:02.536866Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-019243","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [e4701015deca0aaf19fd21f101a1b2f4db45f40d88473ec3a8eb76be901e6b18] <==
	{"level":"warn","ts":"2025-10-18T10:23:17.264354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.294129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.313596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.401417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.411122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.445465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.463784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.506107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.559677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.610154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.694537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.739590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.765745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.794901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.816281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.842230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.869893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.896104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.945921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:17.978208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:18.024834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:18.057696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:18.093105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:18.117239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:23:18.195837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45078","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:23:40 up  2:06,  0 user,  load average: 3.05, 2.34, 2.07
	Linux pause-019243 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [09054465f840b5bd38d7d4516d56642a1c6df2c6eb394ff6de2428c47a2a957d] <==
	I1018 10:23:13.252597       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:23:13.253629       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 10:23:13.253785       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:23:13.253798       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:23:13.253811       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:23:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:23:13.527352       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:23:13.527508       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:23:13.527664       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:23:13.531188       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 10:23:19.429844       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 10:23:19.429960       1 metrics.go:72] Registering metrics
	I1018 10:23:19.430096       1 controller.go:711] "Syncing nftables rules"
	I1018 10:23:23.525473       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:23:23.525631       1 main.go:301] handling current node
	I1018 10:23:33.524638       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:23:33.524697       1 main.go:301] handling current node
	
	
	==> kindnet [3cc277a3092b1996e080de34cfec6f38d30c32c1fb580882942cb48454483741] <==
	I1018 10:22:16.237483       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:22:16.237878       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 10:22:16.249360       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:22:16.249450       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:22:16.249490       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:22:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:22:16.458074       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:22:16.458266       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:22:16.458321       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:22:16.463699       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 10:22:46.458415       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 10:22:46.463140       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 10:22:46.463235       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 10:22:46.464738       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 10:22:47.962694       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 10:22:47.962817       1 metrics.go:72] Registering metrics
	I1018 10:22:47.962910       1 controller.go:711] "Syncing nftables rules"
	I1018 10:22:56.461680       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:22:56.461731       1 main.go:301] handling current node
	
	
	==> kube-apiserver [888e1c745ae86edf3cfb0b8124645f0fd6da8c2376869b3e66ea6f0930abf181] <==
	W1018 10:23:01.976550       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.976624       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.976675       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1018 10:23:01.985880       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W1018 10:23:01.986149       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.986319       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.986444       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.986597       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.986703       1 logging.go:55] [core] [Channel #1 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.986846       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.986957       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.987057       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.987189       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.987299       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.987456       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.987679       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.987793       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.987940       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.988056       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.988185       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.988289       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.988443       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.988634       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.988780       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 10:23:01.994567       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e9fc519ce787134d9fb283ac3940bcdbcda1de76ea88d17fe5e11bd56e515333] <==
	I1018 10:23:19.324300       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 10:23:19.343346       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 10:23:19.343517       1 aggregator.go:171] initial CRD sync complete...
	I1018 10:23:19.343564       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 10:23:19.343629       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 10:23:19.343751       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 10:23:19.343785       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 10:23:19.352498       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 10:23:19.358684       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 10:23:19.362290       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 10:23:19.366600       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 10:23:19.366794       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:23:19.373223       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 10:23:19.373373       1 policy_source.go:240] refreshing policies
	I1018 10:23:19.404117       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 10:23:19.449983       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 10:23:19.453674       1 cache.go:39] Caches are synced for autoregister controller
	I1018 10:23:19.453955       1 cache.go:39] Caches are synced for LocalAvailability controller
	E1018 10:23:19.493106       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 10:23:19.955080       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 10:23:21.273645       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 10:23:22.765717       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 10:23:22.792028       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 10:23:22.840280       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 10:23:22.993217       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [d7c9eaf75f0a52bfc264229bcdb9b422cf7b433d030219d78d959d7090553e93] <==
	I1018 10:23:22.738690       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 10:23:22.742381       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 10:23:22.744507       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 10:23:22.744669       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 10:23:22.749383       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 10:23:22.749785       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 10:23:22.751077       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 10:23:22.751170       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 10:23:22.751223       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 10:23:22.756832       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 10:23:22.757747       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 10:23:22.759864       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 10:23:22.775745       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:23:22.783325       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 10:23:22.783595       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 10:23:22.783687       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 10:23:22.784739       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 10:23:22.786881       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 10:23:22.787062       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 10:23:22.788111       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 10:23:22.789491       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 10:23:22.789564       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 10:23:22.799576       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:23:22.803942       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:23:22.808406       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-controller-manager [d9c28587d48616a8fdebac1348a88dc7e223b29162f351fcf753a56d430aa742] <==
	I1018 10:22:12.633726       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 10:22:12.642279       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 10:22:12.661904       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 10:22:12.662160       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 10:22:12.662354       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 10:22:12.662513       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 10:22:12.669981       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 10:22:12.670089       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 10:22:12.670118       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 10:22:12.670311       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 10:22:12.673345       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 10:22:12.678289       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 10:22:12.678388       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 10:22:12.705252       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-019243" podCIDRs=["10.244.0.0/24"]
	I1018 10:22:12.762258       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 10:22:12.789448       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:22:12.798658       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:22:12.799103       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 10:22:12.819579       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 10:22:12.819841       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 10:22:12.969732       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:22:13.013419       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:22:13.013515       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 10:22:13.013545       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 10:22:57.620189       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [006738fd96b2a20ec03049da106472c554433b20145062583aebec83cb373d89] <==
	I1018 10:22:16.477088       1 server_linux.go:53] "Using iptables proxy"
	I1018 10:22:16.735036       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 10:22:16.836064       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 10:22:16.836102       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 10:22:16.836178       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 10:22:16.900274       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:22:16.900409       1 server_linux.go:132] "Using iptables Proxier"
	I1018 10:22:16.912233       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 10:22:16.918642       1 server.go:527] "Version info" version="v1.34.1"
	I1018 10:22:16.918980       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:22:16.920424       1 config.go:200] "Starting service config controller"
	I1018 10:22:16.920486       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 10:22:16.920535       1 config.go:106] "Starting endpoint slice config controller"
	I1018 10:22:16.920563       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 10:22:16.920604       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 10:22:16.920643       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 10:22:16.927005       1 config.go:309] "Starting node config controller"
	I1018 10:22:16.927094       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 10:22:16.927124       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 10:22:17.021602       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 10:22:17.021611       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 10:22:17.021630       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d28ad7321d63b96d8f407c66665665893126b46543ebff1b3dbf9af6d6c2dfa7] <==
	I1018 10:23:16.317455       1 server_linux.go:53] "Using iptables proxy"
	I1018 10:23:17.425796       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 10:23:19.463505       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 10:23:19.463613       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 10:23:19.463725       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 10:23:19.591139       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:23:19.591192       1 server_linux.go:132] "Using iptables Proxier"
	I1018 10:23:19.623595       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 10:23:19.623933       1 server.go:527] "Version info" version="v1.34.1"
	I1018 10:23:19.623988       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:23:19.625420       1 config.go:200] "Starting service config controller"
	I1018 10:23:19.625498       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 10:23:19.625716       1 config.go:106] "Starting endpoint slice config controller"
	I1018 10:23:19.625788       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 10:23:19.625839       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 10:23:19.625867       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 10:23:19.626529       1 config.go:309] "Starting node config controller"
	I1018 10:23:19.626582       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 10:23:19.626610       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 10:23:19.735317       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 10:23:19.735455       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 10:23:19.735558       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [06d43ac35777a88f3e530cc1738680e465f2c9e6e963d7941bfd460e7bbddbd1] <==
	I1018 10:23:16.511920       1 serving.go:386] Generated self-signed cert in-memory
	W1018 10:23:19.214400       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 10:23:19.214526       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 10:23:19.214562       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 10:23:19.214626       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 10:23:19.435481       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 10:23:19.444062       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:23:19.459055       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:23:19.459166       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:23:19.459833       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 10:23:19.459935       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 10:23:19.559737       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [8600f9e89059224c9e5954596534b99d00dc73824984fc82abde77714c802a01] <==
	I1018 10:22:01.675667       1 serving.go:386] Generated self-signed cert in-memory
	W1018 10:22:06.714009       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 10:22:06.714126       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 10:22:06.714161       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 10:22:06.714211       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 10:22:06.775190       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 10:22:06.775301       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:22:06.777800       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 10:22:06.777978       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:22:06.778025       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:22:06.778076       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 10:22:06.812502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 10:22:07.778375       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:23:01.990910       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 10:23:01.996272       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 10:23:01.996361       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 10:23:01.996419       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:23:01.999437       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 10:23:01.999789       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.024816    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-wzfbh\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6de6d24e-a83f-44a4-b857-2dfe3762f0ad" pod="kube-system/coredns-66bc5c9577-wzfbh"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.025339    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="756bdd1668761891cf05525b0230f65f" pod="kube-system/etcd-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.025561    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="10d0834ecec333ea59092e48efdde8b7" pod="kube-system/kube-apiserver-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: I1018 10:23:13.044414    1303 scope.go:117] "RemoveContainer" containerID="b0c8a1278a6d644d49e8aa83478280670ab2c3020dc228a9b4dfe7c86b1f20f5"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.044958    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="10d0834ecec333ea59092e48efdde8b7" pod="kube-system/kube-apiserver-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.045531    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="770e4b281efe92e27e8c8070495183a9" pod="kube-system/kube-scheduler-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.045836    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9p267\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="77f6445a-e9ac-4649-97a9-01e4119993f6" pod="kube-system/kindnet-9p267"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.046129    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ph8v\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="cffc1e4e-0867-497b-9adf-a0e9b98374b5" pod="kube-system/kube-proxy-9ph8v"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.046395    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-wzfbh\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6de6d24e-a83f-44a4-b857-2dfe3762f0ad" pod="kube-system/coredns-66bc5c9577-wzfbh"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.046761    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="756bdd1668761891cf05525b0230f65f" pod="kube-system/etcd-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.051917    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="07d6d73554a3524b20c45fc6c7fce5a6" pod="kube-system/kube-controller-manager-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.052383    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="756bdd1668761891cf05525b0230f65f" pod="kube-system/etcd-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: I1018 10:23:13.052746    1303 scope.go:117] "RemoveContainer" containerID="d9c28587d48616a8fdebac1348a88dc7e223b29162f351fcf753a56d430aa742"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.053776    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="10d0834ecec333ea59092e48efdde8b7" pod="kube-system/kube-apiserver-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.054012    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-019243\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="770e4b281efe92e27e8c8070495183a9" pod="kube-system/kube-scheduler-pause-019243"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.054176    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9p267\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="77f6445a-e9ac-4649-97a9-01e4119993f6" pod="kube-system/kindnet-9p267"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.054562    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ph8v\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="cffc1e4e-0867-497b-9adf-a0e9b98374b5" pod="kube-system/kube-proxy-9ph8v"
	Oct 18 10:23:13 pause-019243 kubelet[1303]: E1018 10:23:13.054742    1303 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-wzfbh\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6de6d24e-a83f-44a4-b857-2dfe3762f0ad" pod="kube-system/coredns-66bc5c9577-wzfbh"
	Oct 18 10:23:18 pause-019243 kubelet[1303]: W1018 10:23:18.987508    1303 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 18 10:23:19 pause-019243 kubelet[1303]: E1018 10:23:19.171091    1303 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-019243\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-019243' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 18 10:23:19 pause-019243 kubelet[1303]: E1018 10:23:19.171524    1303 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-019243\" is forbidden: User \"system:node:pause-019243\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-019243' and this object" podUID="770e4b281efe92e27e8c8070495183a9" pod="kube-system/kube-scheduler-pause-019243"
	Oct 18 10:23:19 pause-019243 kubelet[1303]: E1018 10:23:19.275196    1303 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-9p267\" is forbidden: User \"system:node:pause-019243\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-019243' and this object" podUID="77f6445a-e9ac-4649-97a9-01e4119993f6" pod="kube-system/kindnet-9p267"
	Oct 18 10:23:32 pause-019243 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 10:23:32 pause-019243 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 10:23:32 pause-019243 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-019243 -n pause-019243
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-019243 -n pause-019243: exit status 2 (404.350648ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-019243 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (9.49s)
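The failure pattern matches the kubelet events in the node description above: a ContainerGCFailed warning because /var/run/crio/crio.sock vanished while the runtime restarted around the pause. A quick check of the CRI socket and runtime state from inside the node, sketched against this run's profile name (crictl being available in the node image is an assumption):

	out/minikube-linux-arm64 -p pause-019243 ssh -- sudo ls -l /var/run/crio/crio.sock
	out/minikube-linux-arm64 -p pause-019243 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a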

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-309062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-309062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (255.347976ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:29:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-309062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-309062 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-309062 describe deploy/metrics-server -n kube-system: exit status 1 (86.017154ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-309062 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
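The MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-state check, which (per its own message) runs `sudo runc list -f json` inside the node; the `open /run/runc: no such file or directory` line means runc's default state directory does not exist there. A minimal reproduction sketch, assuming the old-k8s-version-309062 profile from this run is still up (the /run/crun path is an assumption about where an alternative OCI runtime would keep its state, not something this log confirms):

	# Re-run the exact check minikube performs for "is the cluster paused".
	out/minikube-linux-arm64 ssh -p old-k8s-version-309062 -- sudo runc list -f json
	# If /run/runc is absent, see which runtime state directories do exist;
	# CRI-O can be configured to use crun, whose default root is /run/crun.
	out/minikube-linux-arm64 ssh -p old-k8s-version-309062 -- sudo ls -d /run/runc /run/crun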
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-309062
helpers_test.go:243: (dbg) docker inspect old-k8s-version-309062:

-- stdout --
	[
	    {
	        "Id": "ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750",
	        "Created": "2025-10-18T10:28:48.73837051Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 467235,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:28:48.819325992Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750/hostname",
	        "HostsPath": "/var/lib/docker/containers/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750/hosts",
	        "LogPath": "/var/lib/docker/containers/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750-json.log",
	        "Name": "/old-k8s-version-309062",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-309062:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-309062",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750",
	                "LowerDir": "/var/lib/docker/overlay2/76f2ddbb8a111823c1151fde350c303f28ae9e1b59f3c48b606ee26f7eb90656-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/76f2ddbb8a111823c1151fde350c303f28ae9e1b59f3c48b606ee26f7eb90656/merged",
	                "UpperDir": "/var/lib/docker/overlay2/76f2ddbb8a111823c1151fde350c303f28ae9e1b59f3c48b606ee26f7eb90656/diff",
	                "WorkDir": "/var/lib/docker/overlay2/76f2ddbb8a111823c1151fde350c303f28ae9e1b59f3c48b606ee26f7eb90656/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-309062",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-309062/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-309062",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-309062",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-309062",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df72d9b8fee87f8bd78d88a94c314a582541d63b2261c50741e7faa6d87ab585",
	            "SandboxKey": "/var/run/docker/netns/df72d9b8fee8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-309062": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:c2:ba:45:f6:c2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "082c8a75e8eb3b8d93bfcaf0e7df425e066e901e2d22d2638140f1c9d2501c82",
	                    "EndpointID": "bfd092982fd143e1b63d971ab8a2f767cf44ceddd37315e3c16cd22259a381b5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-309062",
	                        "ef75e2f86668"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
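The inspect output above is what the post-mortem helpers parse to reach the node; the same Go-template style that appears later in these logs can pull out a single field. A minimal sketch, assuming the old-k8s-version-309062 container from this run still exists:

	# Host port that 8443/tcp (the API server) is published on.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-309062
	# Static IP on the per-profile bridge network.
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-309062").IPAddress}}' old-k8s-version-309062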
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-309062 -n old-k8s-version-309062
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-309062 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-309062 logs -n 25: (1.227481257s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-881658 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo docker system info                                                                                                                                                                                                      │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo containerd config dump                                                                                                                                                                                                  │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo crio config                                                                                                                                                                                                             │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ delete  │ -p cilium-881658                                                                                                                                                                                                                              │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │ 18 Oct 25 10:27 UTC │
	│ start   │ -p cert-expiration-733799 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-733799   │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │ 18 Oct 25 10:28 UTC │
	│ delete  │ -p force-systemd-env-360583                                                                                                                                                                                                                   │ force-systemd-env-360583 │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ start   │ -p cert-options-233372 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-233372      │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ ssh     │ cert-options-233372 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-233372      │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ ssh     │ -p cert-options-233372 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-233372      │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ delete  │ -p cert-options-233372                                                                                                                                                                                                                        │ cert-options-233372      │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ start   │ -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-309062   │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-309062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-309062   │ jenkins │ v1.37.0 │ 18 Oct 25 10:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:28:42
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 10:28:42.626675  466848 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:28:42.627297  466848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:28:42.627312  466848 out.go:374] Setting ErrFile to fd 2...
	I1018 10:28:42.627318  466848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:28:42.627661  466848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:28:42.628185  466848 out.go:368] Setting JSON to false
	I1018 10:28:42.629312  466848 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7873,"bootTime":1760775450,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:28:42.629386  466848 start.go:141] virtualization:  
	I1018 10:28:42.633030  466848 out.go:179] * [old-k8s-version-309062] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:28:42.637605  466848 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:28:42.637701  466848 notify.go:220] Checking for updates...
	I1018 10:28:42.644282  466848 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:28:42.647709  466848 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:28:42.650890  466848 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:28:42.654655  466848 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:28:42.657723  466848 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:28:42.661462  466848 config.go:182] Loaded profile config "cert-expiration-733799": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:28:42.661594  466848 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:28:42.690697  466848 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:28:42.690841  466848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:28:42.748653  466848 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 10:28:42.738979459 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:28:42.748783  466848 docker.go:318] overlay module found
	I1018 10:28:42.752463  466848 out.go:179] * Using the docker driver based on user configuration
	I1018 10:28:42.755871  466848 start.go:305] selected driver: docker
	I1018 10:28:42.755893  466848 start.go:925] validating driver "docker" against <nil>
	I1018 10:28:42.755907  466848 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:28:42.756773  466848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:28:42.812313  466848 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 10:28:42.803828232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:28:42.812471  466848 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 10:28:42.812697  466848 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:28:42.815614  466848 out.go:179] * Using Docker driver with root privileges
	I1018 10:28:42.818433  466848 cni.go:84] Creating CNI manager for ""
	I1018 10:28:42.818507  466848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:28:42.818520  466848 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 10:28:42.818599  466848 start.go:349] cluster config:
	{Name:old-k8s-version-309062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-309062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:28:42.821680  466848 out.go:179] * Starting "old-k8s-version-309062" primary control-plane node in "old-k8s-version-309062" cluster
	I1018 10:28:42.824542  466848 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:28:42.827436  466848 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:28:42.830257  466848 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 10:28:42.830302  466848 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:28:42.830311  466848 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1018 10:28:42.830329  466848 cache.go:58] Caching tarball of preloaded images
	I1018 10:28:42.830411  466848 preload.go:233] Found /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 10:28:42.830420  466848 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1018 10:28:42.830525  466848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/config.json ...
	I1018 10:28:42.830565  466848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/config.json: {Name:mk4f93ecd93f60855e251811b0949c6897eef1af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:28:42.849452  466848 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:28:42.849477  466848 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 10:28:42.849496  466848 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:28:42.849518  466848 start.go:360] acquireMachinesLock for old-k8s-version-309062: {Name:mk333aedf28dfb10369ff3bbd67e5aaa24750284 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:28:42.849663  466848 start.go:364] duration metric: took 123.349µs to acquireMachinesLock for "old-k8s-version-309062"
	I1018 10:28:42.849692  466848 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-309062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-309062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:28:42.849790  466848 start.go:125] createHost starting for "" (driver="docker")
	I1018 10:28:42.855033  466848 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 10:28:42.855283  466848 start.go:159] libmachine.API.Create for "old-k8s-version-309062" (driver="docker")
	I1018 10:28:42.855329  466848 client.go:168] LocalClient.Create starting
	I1018 10:28:42.855420  466848 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem
	I1018 10:28:42.855457  466848 main.go:141] libmachine: Decoding PEM data...
	I1018 10:28:42.855478  466848 main.go:141] libmachine: Parsing certificate...
	I1018 10:28:42.855545  466848 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem
	I1018 10:28:42.855569  466848 main.go:141] libmachine: Decoding PEM data...
	I1018 10:28:42.855585  466848 main.go:141] libmachine: Parsing certificate...
	I1018 10:28:42.855949  466848 cli_runner.go:164] Run: docker network inspect old-k8s-version-309062 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 10:28:42.870797  466848 cli_runner.go:211] docker network inspect old-k8s-version-309062 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 10:28:42.870880  466848 network_create.go:284] running [docker network inspect old-k8s-version-309062] to gather additional debugging logs...
	I1018 10:28:42.870900  466848 cli_runner.go:164] Run: docker network inspect old-k8s-version-309062
	W1018 10:28:42.885771  466848 cli_runner.go:211] docker network inspect old-k8s-version-309062 returned with exit code 1
	I1018 10:28:42.885802  466848 network_create.go:287] error running [docker network inspect old-k8s-version-309062]: docker network inspect old-k8s-version-309062: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-309062 not found
	I1018 10:28:42.885816  466848 network_create.go:289] output of [docker network inspect old-k8s-version-309062]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-309062 not found
	
	** /stderr **
	I1018 10:28:42.885949  466848 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:28:42.901423  466848 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-57e2bd20fa2f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:61:d0:06:18:0c} reservation:<nil>}
	I1018 10:28:42.901698  466848 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb4a8c61b69d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:8c:0f:03:ab:d8} reservation:<nil>}
	I1018 10:28:42.902036  466848 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1d3a8356dfdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:ce:7a:d0:e4:d4} reservation:<nil>}
	I1018 10:28:42.902457  466848 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a14040}
	I1018 10:28:42.902481  466848 network_create.go:124] attempt to create docker network old-k8s-version-309062 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 10:28:42.902547  466848 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-309062 old-k8s-version-309062
	I1018 10:28:42.972515  466848 network_create.go:108] docker network old-k8s-version-309062 192.168.76.0/24 created
	I1018 10:28:42.972546  466848 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-309062" container
	I1018 10:28:42.972615  466848 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 10:28:42.989421  466848 cli_runner.go:164] Run: docker volume create old-k8s-version-309062 --label name.minikube.sigs.k8s.io=old-k8s-version-309062 --label created_by.minikube.sigs.k8s.io=true
	I1018 10:28:43.007359  466848 oci.go:103] Successfully created a docker volume old-k8s-version-309062
	I1018 10:28:43.007445  466848 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-309062-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-309062 --entrypoint /usr/bin/test -v old-k8s-version-309062:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 10:28:43.509639  466848 oci.go:107] Successfully prepared a docker volume old-k8s-version-309062
	I1018 10:28:43.509685  466848 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 10:28:43.509705  466848 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 10:28:43.509781  466848 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-309062:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 10:28:48.657712  466848 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-309062:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.147882986s)
	I1018 10:28:48.657745  466848 kic.go:203] duration metric: took 5.148036316s to extract preloaded images to volume ...
	W1018 10:28:48.657880  466848 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 10:28:48.657983  466848 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 10:28:48.714877  466848 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-309062 --name old-k8s-version-309062 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-309062 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-309062 --network old-k8s-version-309062 --ip 192.168.76.2 --volume old-k8s-version-309062:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 10:28:49.058589  466848 cli_runner.go:164] Run: docker container inspect old-k8s-version-309062 --format={{.State.Running}}
	I1018 10:28:49.084464  466848 cli_runner.go:164] Run: docker container inspect old-k8s-version-309062 --format={{.State.Status}}
	I1018 10:28:49.108003  466848 cli_runner.go:164] Run: docker exec old-k8s-version-309062 stat /var/lib/dpkg/alternatives/iptables
	I1018 10:28:49.158693  466848 oci.go:144] the created container "old-k8s-version-309062" has a running status.
	I1018 10:28:49.158744  466848 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/old-k8s-version-309062/id_rsa...
	I1018 10:28:49.563381  466848 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-293333/.minikube/machines/old-k8s-version-309062/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 10:28:49.591242  466848 cli_runner.go:164] Run: docker container inspect old-k8s-version-309062 --format={{.State.Status}}
	I1018 10:28:49.616700  466848 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 10:28:49.616720  466848 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-309062 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 10:28:49.669957  466848 cli_runner.go:164] Run: docker container inspect old-k8s-version-309062 --format={{.State.Status}}
	I1018 10:28:49.689324  466848 machine.go:93] provisionDockerMachine start ...
	I1018 10:28:49.689416  466848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-309062
	I1018 10:28:49.707136  466848 main.go:141] libmachine: Using SSH client type: native
	I1018 10:28:49.707491  466848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I1018 10:28:49.707506  466848 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:28:49.708161  466848 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44318->127.0.0.1:33419: read: connection reset by peer
	I1018 10:28:52.856915  466848 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-309062
	
	I1018 10:28:52.856982  466848 ubuntu.go:182] provisioning hostname "old-k8s-version-309062"
	I1018 10:28:52.857063  466848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-309062
	I1018 10:28:52.874117  466848 main.go:141] libmachine: Using SSH client type: native
	I1018 10:28:52.874425  466848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I1018 10:28:52.874441  466848 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-309062 && echo "old-k8s-version-309062" | sudo tee /etc/hostname
	I1018 10:28:53.031193  466848 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-309062
	
	I1018 10:28:53.031283  466848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-309062
	I1018 10:28:53.048894  466848 main.go:141] libmachine: Using SSH client type: native
	I1018 10:28:53.049241  466848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I1018 10:28:53.049269  466848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-309062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-309062/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-309062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:28:53.201597  466848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 10:28:53.201626  466848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:28:53.201654  466848 ubuntu.go:190] setting up certificates
	I1018 10:28:53.201664  466848 provision.go:84] configureAuth start
	I1018 10:28:53.201734  466848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-309062
	I1018 10:28:53.219415  466848 provision.go:143] copyHostCerts
	I1018 10:28:53.219487  466848 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:28:53.219499  466848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:28:53.219574  466848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:28:53.219674  466848 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:28:53.219683  466848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:28:53.219710  466848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:28:53.219776  466848 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:28:53.219785  466848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:28:53.219809  466848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:28:53.219865  466848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-309062 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-309062]
	I1018 10:28:53.589128  466848 provision.go:177] copyRemoteCerts
	I1018 10:28:53.589217  466848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:28:53.589257  466848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-309062
	I1018 10:28:53.606937  466848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/old-k8s-version-309062/id_rsa Username:docker}
	I1018 10:28:53.709297  466848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:28:53.727394  466848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 10:28:53.745458  466848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 10:28:53.763692  466848 provision.go:87] duration metric: took 562.01236ms to configureAuth
	I1018 10:28:53.763720  466848 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:28:53.763968  466848 config.go:182] Loaded profile config "old-k8s-version-309062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 10:28:53.764111  466848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-309062
	I1018 10:28:53.783354  466848 main.go:141] libmachine: Using SSH client type: native
	I1018 10:28:53.783677  466848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I1018 10:28:53.783698  466848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:28:54.050723  466848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:28:54.050744  466848 machine.go:96] duration metric: took 4.361395415s to provisionDockerMachine
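The SSH command above only writes an environment file; it takes effect because the CRI-O unit in the kic base image presumably sources /etc/sysconfig/crio.minikube and expands $CRIO_MINIKUBE_OPTIONS on its command line. A minimal drop-in wiring an env file up the same way (a sketch of the mechanism, not the actual unit shipped in kicbase):

    sudo mkdir -p /etc/systemd/system/crio.service.d
    sudo tee /etc/systemd/system/crio.service.d/10-options.conf <<'EOF'
    [Service]
    EnvironmentFile=-/etc/sysconfig/crio.minikube
    ExecStart=
    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart crio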
	I1018 10:28:54.050754  466848 client.go:171] duration metric: took 11.195413787s to LocalClient.Create
	I1018 10:28:54.050777  466848 start.go:167] duration metric: took 11.195495888s to libmachine.API.Create "old-k8s-version-309062"
	I1018 10:28:54.050786  466848 start.go:293] postStartSetup for "old-k8s-version-309062" (driver="docker")
	I1018 10:28:54.050796  466848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:28:54.050865  466848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:28:54.050910  466848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-309062
	I1018 10:28:54.073255  466848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/old-k8s-version-309062/id_rsa Username:docker}
	I1018 10:28:54.181950  466848 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:28:54.185806  466848 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:28:54.185837  466848 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:28:54.185849  466848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:28:54.185907  466848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:28:54.185989  466848 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:28:54.186097  466848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:28:54.194136  466848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:28:54.213054  466848 start.go:296] duration metric: took 162.252709ms for postStartSetup
	I1018 10:28:54.213558  466848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-309062
	I1018 10:28:54.229987  466848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/config.json ...
	I1018 10:28:54.230274  466848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:28:54.230330  466848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-309062
	I1018 10:28:54.247021  466848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/old-k8s-version-309062/id_rsa Username:docker}
	I1018 10:28:54.349847  466848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:28:54.354180  466848 start.go:128] duration metric: took 11.504370404s to createHost
	I1018 10:28:54.354205  466848 start.go:83] releasing machines lock for "old-k8s-version-309062", held for 11.50453052s
	I1018 10:28:54.354278  466848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-309062
	I1018 10:28:54.372088  466848 ssh_runner.go:195] Run: cat /version.json
	I1018 10:28:54.372140  466848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-309062
	I1018 10:28:54.372409  466848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:28:54.372466  466848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-309062
	I1018 10:28:54.392526  466848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/old-k8s-version-309062/id_rsa Username:docker}
	I1018 10:28:54.394107  466848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/old-k8s-version-309062/id_rsa Username:docker}
	I1018 10:28:54.492814  466848 ssh_runner.go:195] Run: systemctl --version
	I1018 10:28:54.585790  466848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:28:54.621434  466848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:28:54.625884  466848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:28:54.626004  466848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:28:54.654317  466848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 10:28:54.654342  466848 start.go:495] detecting cgroup driver to use...
	I1018 10:28:54.654376  466848 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:28:54.654435  466848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:28:54.673087  466848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:28:54.685731  466848 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:28:54.685794  466848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:28:54.707560  466848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:28:54.726434  466848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:28:54.849648  466848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:28:54.972533  466848 docker.go:234] disabling docker service ...
	I1018 10:28:54.972596  466848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:28:54.994219  466848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:28:55.008104  466848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:28:55.136902  466848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:28:55.265122  466848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
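Note the escalation in the docker shutdown sequence above: stop halts the running units, disable removes boot-time activation of the socket, and mask links docker.service to /dev/null so that nothing (socket activation included) can start it while CRI-O owns the node's containers. The end state is easy to confirm by hand:

    systemctl is-enabled docker.service   # prints "masked" after the sequence above
    systemctl is-active docker.service    # prints "inactive"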
	I1018 10:28:55.278727  466848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:28:55.292802  466848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 10:28:55.292916  466848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:28:55.301564  466848 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:28:55.301663  466848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:28:55.311495  466848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:28:55.320838  466848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:28:55.330264  466848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:28:55.338615  466848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:28:55.347692  466848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:28:55.361309  466848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
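Taken together, the sed edits above pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports. The drop-in plausibly ends up reading roughly as follows (a sketch; exact section headers depend on the stock file shipped in the image):

    # /etc/crio/crio.conf.d/02-crio.conf (expected shape after the edits)
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"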
	I1018 10:28:55.370957  466848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:28:55.379159  466848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:28:55.387148  466848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:28:55.504943  466848 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 10:28:55.636906  466848 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:28:55.636976  466848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:28:55.641213  466848 start.go:563] Will wait 60s for crictl version
	I1018 10:28:55.641325  466848 ssh_runner.go:195] Run: which crictl
	I1018 10:28:55.644926  466848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:28:55.671047  466848 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:28:55.671142  466848 ssh_runner.go:195] Run: crio --version
	I1018 10:28:55.698308  466848 ssh_runner.go:195] Run: crio --version
	I1018 10:28:55.731606  466848 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1018 10:28:55.734468  466848 cli_runner.go:164] Run: docker network inspect old-k8s-version-309062 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
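The long --format template above flattens the network's name, driver, IPAM subnet/gateway, MTU, and container IPs into a single JSON object in one docker call. When debugging by hand, the built-in json template helper gets most of the same data with far less quoting (a simpler alternative, not what minikube runs):

    docker network inspect old-k8s-version-309062 --format '{{json .IPAM.Config}}'
    docker network inspect old-k8s-version-309062 --format '{{json .Containers}}'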
	I1018 10:28:55.749824  466848 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 10:28:55.753488  466848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
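This is the report's standard idempotent /etc/hosts update: grep -v strips any stale entry for the name, the fresh tab-separated record is appended, and the temp file is copied back under sudo (a plain redirection would run unprivileged and fail). The same trick as a reusable helper, for reference:

    add_host() {  # usage: add_host 192.168.76.1 host.minikube.internal
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }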
	I1018 10:28:55.763023  466848 kubeadm.go:883] updating cluster {Name:old-k8s-version-309062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-309062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:28:55.763144  466848 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 10:28:55.763200  466848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:28:55.798698  466848 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:28:55.798723  466848 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:28:55.798778  466848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:28:55.826139  466848 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:28:55.826165  466848 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:28:55.826173  466848 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1018 10:28:55.826267  466848 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-309062 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-309062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
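One detail in the kubelet unit above that is easy to misread: the bare ExecStart= line is deliberate. For a non-oneshot service systemd rejects a second ExecStart, so the 10-kubeadm.conf drop-in written a few lines below must first clear the inherited value with an empty assignment before setting its own. The merged result can be checked on the node with:

    systemctl cat kubelet    # prints the base unit plus the 10-kubeadm.conf drop-in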
	I1018 10:28:55.826358  466848 ssh_runner.go:195] Run: crio config
	I1018 10:28:55.890309  466848 cni.go:84] Creating CNI manager for ""
	I1018 10:28:55.890332  466848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:28:55.890392  466848 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:28:55.890435  466848 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-309062 NodeName:old-k8s-version-309062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:28:55.890632  466848 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-309062"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
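The generated config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and fed to kubeadm init at 10:28:57. If you ever need to vet such a file by hand, a dry run exercises the same parsing and preflight paths without touching the node (illustrative; not something the test harness does):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run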
	
	I1018 10:28:55.890721  466848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1018 10:28:55.898627  466848 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:28:55.898701  466848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:28:55.906396  466848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1018 10:28:55.919707  466848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:28:55.934011  466848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1018 10:28:55.951943  466848 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:28:55.955749  466848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:28:55.965470  466848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:28:56.084373  466848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:28:56.105588  466848 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062 for IP: 192.168.76.2
	I1018 10:28:56.105628  466848 certs.go:195] generating shared ca certs ...
	I1018 10:28:56.105644  466848 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:28:56.105807  466848 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:28:56.105894  466848 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:28:56.105919  466848 certs.go:257] generating profile certs ...
	I1018 10:28:56.105999  466848 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.key
	I1018 10:28:56.106017  466848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt with IP's: []
	I1018 10:28:56.482343  466848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt ...
	I1018 10:28:56.482375  466848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt: {Name:mk2d5bac1d67b32df07fab623a6292e1f0ccb634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:28:56.482577  466848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.key ...
	I1018 10:28:56.482591  466848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.key: {Name:mkc60a6c5dfec4f404abb15dfd0dae4c1345178a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:28:56.482691  466848 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/apiserver.key.e119c8f5
	I1018 10:28:56.482709  466848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/apiserver.crt.e119c8f5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 10:28:56.739185  466848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/apiserver.crt.e119c8f5 ...
	I1018 10:28:56.739222  466848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/apiserver.crt.e119c8f5: {Name:mk0cae24b214a12ad2d772dcc68355440984157e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:28:56.740004  466848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/apiserver.key.e119c8f5 ...
	I1018 10:28:56.740064  466848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/apiserver.key.e119c8f5: {Name:mkf3ded67d8c6fc1af2d1525d9bc79d7834dc395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:28:56.740271  466848 certs.go:382] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/apiserver.crt.e119c8f5 -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/apiserver.crt
	I1018 10:28:56.740433  466848 certs.go:386] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/apiserver.key.e119c8f5 -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/apiserver.key
	I1018 10:28:56.740592  466848 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/proxy-client.key
	I1018 10:28:56.740652  466848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/proxy-client.crt with IP's: []
	I1018 10:28:57.029375  466848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/proxy-client.crt ...
	I1018 10:28:57.029410  466848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/proxy-client.crt: {Name:mkd3f95321f3ff414120845081467d0c8add69f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:28:57.029617  466848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/proxy-client.key ...
	I1018 10:28:57.029630  466848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/proxy-client.key: {Name:mk6c62e72b2f924b31af099613a6115920282c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:28:57.029838  466848 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:28:57.029885  466848 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:28:57.029895  466848 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:28:57.029927  466848 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:28:57.029957  466848 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:28:57.029977  466848 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:28:57.030018  466848 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:28:57.030600  466848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:28:57.050472  466848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:28:57.069589  466848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:28:57.089057  466848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:28:57.113273  466848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 10:28:57.132314  466848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 10:28:57.150702  466848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:28:57.168508  466848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 10:28:57.186756  466848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:28:57.206862  466848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:28:57.224782  466848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:28:57.243052  466848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:28:57.256527  466848 ssh_runner.go:195] Run: openssl version
	I1018 10:28:57.263179  466848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:28:57.271389  466848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:28:57.275289  466848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:28:57.275401  466848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:28:57.316381  466848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 10:28:57.324656  466848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:28:57.333034  466848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:28:57.336810  466848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:28:57.336936  466848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:28:57.378486  466848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:28:57.387144  466848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:28:57.395727  466848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:28:57.399607  466848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:28:57.399699  466848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:28:57.441105  466848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:28:57.449834  466848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:28:57.453589  466848 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 10:28:57.453655  466848 kubeadm.go:400] StartCluster: {Name:old-k8s-version-309062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-309062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:28:57.453728  466848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:28:57.453804  466848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:28:57.481633  466848 cri.go:89] found id: ""
	I1018 10:28:57.481714  466848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:28:57.489807  466848 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 10:28:57.497664  466848 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 10:28:57.497772  466848 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 10:28:57.505758  466848 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 10:28:57.505779  466848 kubeadm.go:157] found existing configuration files:
	
	I1018 10:28:57.505857  466848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 10:28:57.513708  466848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 10:28:57.513786  466848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 10:28:57.521392  466848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 10:28:57.529076  466848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 10:28:57.529147  466848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 10:28:57.536546  466848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 10:28:57.544532  466848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 10:28:57.544599  466848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 10:28:57.551898  466848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 10:28:57.559434  466848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 10:28:57.559530  466848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 10:28:57.566792  466848 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 10:28:57.611257  466848 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1018 10:28:57.611320  466848 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 10:28:57.676210  466848 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 10:28:57.676294  466848 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 10:28:57.676336  466848 kubeadm.go:318] OS: Linux
	I1018 10:28:57.676391  466848 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 10:28:57.676457  466848 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 10:28:57.676512  466848 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 10:28:57.676567  466848 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 10:28:57.676621  466848 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 10:28:57.676675  466848 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 10:28:57.676725  466848 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 10:28:57.676780  466848 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 10:28:57.676832  466848 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 10:28:57.761822  466848 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 10:28:57.761943  466848 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 10:28:57.762046  466848 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 10:28:57.903047  466848 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 10:28:57.906030  466848 out.go:252]   - Generating certificates and keys ...
	I1018 10:28:57.906124  466848 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 10:28:57.906200  466848 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 10:28:58.265534  466848 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 10:28:59.005719  466848 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 10:28:59.430691  466848 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 10:29:00.752333  466848 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 10:29:00.975003  466848 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 10:29:00.975308  466848 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-309062] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 10:29:01.262518  466848 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 10:29:01.262877  466848 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-309062] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 10:29:01.761101  466848 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 10:29:02.253341  466848 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 10:29:02.871055  466848 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 10:29:02.871340  466848 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 10:29:03.040919  466848 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 10:29:03.447107  466848 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 10:29:03.847287  466848 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 10:29:04.384981  466848 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 10:29:04.385740  466848 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 10:29:04.388595  466848 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 10:29:04.392596  466848 out.go:252]   - Booting up control plane ...
	I1018 10:29:04.392697  466848 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 10:29:04.392778  466848 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 10:29:04.393137  466848 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 10:29:04.420514  466848 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 10:29:04.422391  466848 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 10:29:04.422703  466848 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 10:29:04.559747  466848 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1018 10:29:12.562291  466848 kubeadm.go:318] [apiclient] All control plane components are healthy after 8.003031 seconds
	I1018 10:29:12.562425  466848 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 10:29:12.582942  466848 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 10:29:13.115957  466848 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 10:29:13.116190  466848 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-309062 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 10:29:13.629729  466848 kubeadm.go:318] [bootstrap-token] Using token: c5yqiq.0tldprrsjv6bel4c
	I1018 10:29:13.632662  466848 out.go:252]   - Configuring RBAC rules ...
	I1018 10:29:13.632805  466848 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 10:29:13.638133  466848 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 10:29:13.657854  466848 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 10:29:13.663067  466848 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 10:29:13.667393  466848 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 10:29:13.674609  466848 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 10:29:13.688932  466848 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 10:29:13.980784  466848 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 10:29:14.050411  466848 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 10:29:14.051647  466848 kubeadm.go:318] 
	I1018 10:29:14.051728  466848 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 10:29:14.051738  466848 kubeadm.go:318] 
	I1018 10:29:14.051820  466848 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 10:29:14.051829  466848 kubeadm.go:318] 
	I1018 10:29:14.051856  466848 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 10:29:14.051922  466848 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 10:29:14.051980  466848 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 10:29:14.051988  466848 kubeadm.go:318] 
	I1018 10:29:14.052045  466848 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 10:29:14.052054  466848 kubeadm.go:318] 
	I1018 10:29:14.052105  466848 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 10:29:14.052114  466848 kubeadm.go:318] 
	I1018 10:29:14.052169  466848 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 10:29:14.052253  466848 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 10:29:14.052329  466848 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 10:29:14.052337  466848 kubeadm.go:318] 
	I1018 10:29:14.052426  466848 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 10:29:14.052511  466848 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 10:29:14.052519  466848 kubeadm.go:318] 
	I1018 10:29:14.052608  466848 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token c5yqiq.0tldprrsjv6bel4c \
	I1018 10:29:14.052721  466848 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 \
	I1018 10:29:14.052746  466848 kubeadm.go:318] 	--control-plane 
	I1018 10:29:14.052755  466848 kubeadm.go:318] 
	I1018 10:29:14.052845  466848 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 10:29:14.052853  466848 kubeadm.go:318] 
	I1018 10:29:14.052940  466848 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token c5yqiq.0tldprrsjv6bel4c \
	I1018 10:29:14.053063  466848 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 
	I1018 10:29:14.056462  466848 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 10:29:14.056591  466848 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 10:29:14.056612  466848 cni.go:84] Creating CNI manager for ""
	I1018 10:29:14.056623  466848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:29:14.061783  466848 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 10:29:14.064737  466848 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 10:29:14.069703  466848 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1018 10:29:14.069722  466848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 10:29:14.103783  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 10:29:15.120528  466848 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.016700407s)
	I1018 10:29:15.120572  466848 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 10:29:15.120703  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:15.120788  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-309062 minikube.k8s.io/updated_at=2025_10_18T10_29_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=old-k8s-version-309062 minikube.k8s.io/primary=true
	I1018 10:29:15.339566  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:15.339642  466848 ops.go:34] apiserver oom_adj: -16
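The -16 read back here comes from the earlier `cat /proc/$(pgrep kube-apiserver)/oom_adj`: on the legacy oom_adj scale (-17..15, lower meaning safer) it marks kube-apiserver as a poor OOM-kill candidate. On current kernels the same protection is visible through the newer interface:

    cat /proc/$(pgrep -o kube-apiserver)/oom_score_adj   # legacy oom_adj -16 maps to about -941 here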
	I1018 10:29:15.839765  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:16.339669  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:16.840597  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:17.340280  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:17.840651  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:18.340654  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:18.840275  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:19.339842  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:19.839919  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:20.339766  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:20.840600  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:21.340540  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:21.840203  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:22.339682  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:22.840164  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:23.340215  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:23.840319  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:24.340247  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:24.840304  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:25.339888  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:25.840288  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:26.340446  466848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:29:26.523493  466848 kubeadm.go:1113] duration metric: took 11.402835858s to wait for elevateKubeSystemPrivileges
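The burst of identical `kubectl get sa default` runs from 10:29:15 to 10:29:26 is a poll, not noise: the minikube-rbac ClusterRoleBinding created at 10:29:15 is only useful once the controller-manager has asynchronously created the "default" ServiceAccount, so minikube retries roughly every 500ms until it exists. The same wait as a one-liner:

    until kubectl -n default get sa default >/dev/null 2>&1; do sleep 0.5; done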
	I1018 10:29:26.523523  466848 kubeadm.go:402] duration metric: took 29.069872091s to StartCluster
	I1018 10:29:26.523540  466848 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:29:26.523602  466848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:29:26.524598  466848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:29:26.524807  466848 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:29:26.524916  466848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 10:29:26.525143  466848 config.go:182] Loaded profile config "old-k8s-version-309062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 10:29:26.525239  466848 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:29:26.525304  466848 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-309062"
	I1018 10:29:26.525318  466848 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-309062"
	I1018 10:29:26.525339  466848 host.go:66] Checking if "old-k8s-version-309062" exists ...
	I1018 10:29:26.525581  466848 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-309062"
	I1018 10:29:26.525604  466848 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-309062"
	I1018 10:29:26.525856  466848 cli_runner.go:164] Run: docker container inspect old-k8s-version-309062 --format={{.State.Status}}
	I1018 10:29:26.525882  466848 cli_runner.go:164] Run: docker container inspect old-k8s-version-309062 --format={{.State.Status}}
	I1018 10:29:26.528180  466848 out.go:179] * Verifying Kubernetes components...
	I1018 10:29:26.531269  466848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:29:26.561119  466848 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:29:26.566710  466848 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:29:26.566733  466848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:29:26.566808  466848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-309062
	I1018 10:29:26.584104  466848 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-309062"
	I1018 10:29:26.584142  466848 host.go:66] Checking if "old-k8s-version-309062" exists ...
	I1018 10:29:26.584553  466848 cli_runner.go:164] Run: docker container inspect old-k8s-version-309062 --format={{.State.Status}}
	I1018 10:29:26.601322  466848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/old-k8s-version-309062/id_rsa Username:docker}
	I1018 10:29:26.622602  466848 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:29:26.622624  466848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:29:26.622682  466848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-309062
	I1018 10:29:26.651090  466848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/old-k8s-version-309062/id_rsa Username:docker}
	I1018 10:29:26.823128  466848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 10:29:26.838598  466848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:29:26.895123  466848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:29:26.929535  466848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:29:28.016903  466848 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.178226695s)
	I1018 10:29:28.016990  466848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.12183049s)
	I1018 10:29:28.018905  466848 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-309062" to be "Ready" ...
	I1018 10:29:28.020083  466848 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.196873708s)
	I1018 10:29:28.020143  466848 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1018 10:29:28.256947  466848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.327367093s)
	I1018 10:29:28.260008  466848 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1018 10:29:28.262961  466848 addons.go:514] duration metric: took 1.737710579s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1018 10:29:28.526040  466848 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-309062" context rescaled to 1 replicas
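The sed pipeline that completed at 10:29:28.020083 rewrites the coredns ConfigMap so host.minikube.internal resolves to the gateway address 192.168.76.1. A quick way to confirm the injected stanza landed, as a minimal sketch against the kubeconfig this run just wrote:

# Print the live Corefile; the hosts block below is what the sed expression injects.
kubectl --context old-k8s-version-309062 -n kube-system \
  get configmap coredns -o jsonpath='{.data.Corefile}'
#   hosts {
#      192.168.76.1 host.minikube.internal
#      fallthrough
#   }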
	W1018 10:29:30.029170  466848 node_ready.go:57] node "old-k8s-version-309062" has "Ready":"False" status (will retry)
	W1018 10:29:32.522506  466848 node_ready.go:57] node "old-k8s-version-309062" has "Ready":"False" status (will retry)
	W1018 10:29:35.022622  466848 node_ready.go:57] node "old-k8s-version-309062" has "Ready":"False" status (will retry)
	W1018 10:29:37.024442  466848 node_ready.go:57] node "old-k8s-version-309062" has "Ready":"False" status (will retry)
	W1018 10:29:39.522531  466848 node_ready.go:57] node "old-k8s-version-309062" has "Ready":"False" status (will retry)
	I1018 10:29:41.022358  466848 node_ready.go:49] node "old-k8s-version-309062" is "Ready"
	I1018 10:29:41.022395  466848 node_ready.go:38] duration metric: took 13.003280935s for node "old-k8s-version-309062" to be "Ready" ...
	I1018 10:29:41.022415  466848 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:29:41.022485  466848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:29:41.036434  466848 api_server.go:72] duration metric: took 14.511591906s to wait for apiserver process to appear ...
	I1018 10:29:41.036468  466848 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:29:41.036493  466848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 10:29:41.046719  466848 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 10:29:41.048389  466848 api_server.go:141] control plane version: v1.28.0
	I1018 10:29:41.048440  466848 api_server.go:131] duration metric: took 11.948867ms to wait for apiserver health ...
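The same health probe can be repeated through kubectl rather than raw HTTPS, reusing the credentials minikube wrote (a minimal sketch):

# A healthy apiserver answers the raw health endpoint with the literal "ok"
# seen in the log above.
kubectl --context old-k8s-version-309062 get --raw /healthz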
	I1018 10:29:41.048450  466848 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:29:41.053078  466848 system_pods.go:59] 8 kube-system pods found
	I1018 10:29:41.053118  466848 system_pods.go:61] "coredns-5dd5756b68-4hhdr" [bc5203af-3c2b-48b1-a3a6-80d606975fd6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:29:41.053129  466848 system_pods.go:61] "etcd-old-k8s-version-309062" [2b021053-cae1-4952-aa55-ef43252e67fa] Running
	I1018 10:29:41.053143  466848 system_pods.go:61] "kindnet-fqnmf" [99893a80-27ed-4abd-8b4e-5cda737b5c5f] Running
	I1018 10:29:41.053148  466848 system_pods.go:61] "kube-apiserver-old-k8s-version-309062" [c551afd7-1e02-4ae0-b0e0-1d093e2fd119] Running
	I1018 10:29:41.053156  466848 system_pods.go:61] "kube-controller-manager-old-k8s-version-309062" [8aecf8dc-3607-4a55-a120-d39e4ea25cbb] Running
	I1018 10:29:41.053161  466848 system_pods.go:61] "kube-proxy-xvwns" [de00d48f-5320-44a6-8cab-46be84cf20ec] Running
	I1018 10:29:41.053167  466848 system_pods.go:61] "kube-scheduler-old-k8s-version-309062" [50e7a1e5-6a7c-4c9d-87d7-f6f30ed1d5f0] Running
	I1018 10:29:41.053174  466848 system_pods.go:61] "storage-provisioner" [a42efbaf-4d67-4611-9aac-34048dc7b962] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:29:41.053203  466848 system_pods.go:74] duration metric: took 4.73393ms to wait for pod list to return data ...
	I1018 10:29:41.053212  466848 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:29:41.057457  466848 default_sa.go:45] found service account: "default"
	I1018 10:29:41.057482  466848 default_sa.go:55] duration metric: took 4.263789ms for default service account to be created ...
	I1018 10:29:41.057491  466848 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 10:29:41.061334  466848 system_pods.go:86] 8 kube-system pods found
	I1018 10:29:41.061370  466848 system_pods.go:89] "coredns-5dd5756b68-4hhdr" [bc5203af-3c2b-48b1-a3a6-80d606975fd6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:29:41.061378  466848 system_pods.go:89] "etcd-old-k8s-version-309062" [2b021053-cae1-4952-aa55-ef43252e67fa] Running
	I1018 10:29:41.061385  466848 system_pods.go:89] "kindnet-fqnmf" [99893a80-27ed-4abd-8b4e-5cda737b5c5f] Running
	I1018 10:29:41.061389  466848 system_pods.go:89] "kube-apiserver-old-k8s-version-309062" [c551afd7-1e02-4ae0-b0e0-1d093e2fd119] Running
	I1018 10:29:41.061396  466848 system_pods.go:89] "kube-controller-manager-old-k8s-version-309062" [8aecf8dc-3607-4a55-a120-d39e4ea25cbb] Running
	I1018 10:29:41.061400  466848 system_pods.go:89] "kube-proxy-xvwns" [de00d48f-5320-44a6-8cab-46be84cf20ec] Running
	I1018 10:29:41.061405  466848 system_pods.go:89] "kube-scheduler-old-k8s-version-309062" [50e7a1e5-6a7c-4c9d-87d7-f6f30ed1d5f0] Running
	I1018 10:29:41.061411  466848 system_pods.go:89] "storage-provisioner" [a42efbaf-4d67-4611-9aac-34048dc7b962] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:29:41.061443  466848 retry.go:31] will retry after 200.923119ms: missing components: kube-dns
	I1018 10:29:41.266206  466848 system_pods.go:86] 8 kube-system pods found
	I1018 10:29:41.266243  466848 system_pods.go:89] "coredns-5dd5756b68-4hhdr" [bc5203af-3c2b-48b1-a3a6-80d606975fd6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:29:41.266250  466848 system_pods.go:89] "etcd-old-k8s-version-309062" [2b021053-cae1-4952-aa55-ef43252e67fa] Running
	I1018 10:29:41.266255  466848 system_pods.go:89] "kindnet-fqnmf" [99893a80-27ed-4abd-8b4e-5cda737b5c5f] Running
	I1018 10:29:41.266260  466848 system_pods.go:89] "kube-apiserver-old-k8s-version-309062" [c551afd7-1e02-4ae0-b0e0-1d093e2fd119] Running
	I1018 10:29:41.266264  466848 system_pods.go:89] "kube-controller-manager-old-k8s-version-309062" [8aecf8dc-3607-4a55-a120-d39e4ea25cbb] Running
	I1018 10:29:41.266269  466848 system_pods.go:89] "kube-proxy-xvwns" [de00d48f-5320-44a6-8cab-46be84cf20ec] Running
	I1018 10:29:41.266273  466848 system_pods.go:89] "kube-scheduler-old-k8s-version-309062" [50e7a1e5-6a7c-4c9d-87d7-f6f30ed1d5f0] Running
	I1018 10:29:41.266281  466848 system_pods.go:89] "storage-provisioner" [a42efbaf-4d67-4611-9aac-34048dc7b962] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:29:41.266300  466848 retry.go:31] will retry after 372.513324ms: missing components: kube-dns
	I1018 10:29:41.643769  466848 system_pods.go:86] 8 kube-system pods found
	I1018 10:29:41.643804  466848 system_pods.go:89] "coredns-5dd5756b68-4hhdr" [bc5203af-3c2b-48b1-a3a6-80d606975fd6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:29:41.643812  466848 system_pods.go:89] "etcd-old-k8s-version-309062" [2b021053-cae1-4952-aa55-ef43252e67fa] Running
	I1018 10:29:41.643818  466848 system_pods.go:89] "kindnet-fqnmf" [99893a80-27ed-4abd-8b4e-5cda737b5c5f] Running
	I1018 10:29:41.643822  466848 system_pods.go:89] "kube-apiserver-old-k8s-version-309062" [c551afd7-1e02-4ae0-b0e0-1d093e2fd119] Running
	I1018 10:29:41.643827  466848 system_pods.go:89] "kube-controller-manager-old-k8s-version-309062" [8aecf8dc-3607-4a55-a120-d39e4ea25cbb] Running
	I1018 10:29:41.643830  466848 system_pods.go:89] "kube-proxy-xvwns" [de00d48f-5320-44a6-8cab-46be84cf20ec] Running
	I1018 10:29:41.643834  466848 system_pods.go:89] "kube-scheduler-old-k8s-version-309062" [50e7a1e5-6a7c-4c9d-87d7-f6f30ed1d5f0] Running
	I1018 10:29:41.643839  466848 system_pods.go:89] "storage-provisioner" [a42efbaf-4d67-4611-9aac-34048dc7b962] Running
	I1018 10:29:41.643847  466848 system_pods.go:126] duration metric: took 586.34987ms to wait for k8s-apps to be running ...
	I1018 10:29:41.643858  466848 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 10:29:41.643911  466848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:29:41.661980  466848 system_svc.go:56] duration metric: took 18.111007ms WaitForService to wait for kubelet
	I1018 10:29:41.662007  466848 kubeadm.go:586] duration metric: took 15.137171211s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:29:41.662027  466848 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:29:41.665121  466848 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:29:41.665162  466848 node_conditions.go:123] node cpu capacity is 2
	I1018 10:29:41.665175  466848 node_conditions.go:105] duration metric: took 3.142141ms to run NodePressure ...
	I1018 10:29:41.665215  466848 start.go:241] waiting for startup goroutines ...
	I1018 10:29:41.665223  466848 start.go:246] waiting for cluster config update ...
	I1018 10:29:41.665239  466848 start.go:255] writing updated cluster config ...
	I1018 10:29:41.665525  466848 ssh_runner.go:195] Run: rm -f paused
	I1018 10:29:41.669831  466848 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:29:41.674199  466848 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-4hhdr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:29:42.680459  466848 pod_ready.go:94] pod "coredns-5dd5756b68-4hhdr" is "Ready"
	I1018 10:29:42.680489  466848 pod_ready.go:86] duration metric: took 1.006268762s for pod "coredns-5dd5756b68-4hhdr" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:29:42.683535  466848 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-309062" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:29:42.693385  466848 pod_ready.go:94] pod "etcd-old-k8s-version-309062" is "Ready"
	I1018 10:29:42.693412  466848 pod_ready.go:86] duration metric: took 9.851657ms for pod "etcd-old-k8s-version-309062" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:29:42.696578  466848 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-309062" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:29:42.701597  466848 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-309062" is "Ready"
	I1018 10:29:42.701625  466848 pod_ready.go:86] duration metric: took 5.019749ms for pod "kube-apiserver-old-k8s-version-309062" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:29:42.705406  466848 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-309062" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:29:42.878882  466848 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-309062" is "Ready"
	I1018 10:29:42.878923  466848 pod_ready.go:86] duration metric: took 173.488371ms for pod "kube-controller-manager-old-k8s-version-309062" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:29:43.078777  466848 pod_ready.go:83] waiting for pod "kube-proxy-xvwns" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:29:43.479303  466848 pod_ready.go:94] pod "kube-proxy-xvwns" is "Ready"
	I1018 10:29:43.479330  466848 pod_ready.go:86] duration metric: took 400.5265ms for pod "kube-proxy-xvwns" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:29:43.679252  466848 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-309062" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:29:44.078647  466848 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-309062" is "Ready"
	I1018 10:29:44.078688  466848 pod_ready.go:86] duration metric: took 399.399956ms for pod "kube-scheduler-old-k8s-version-309062" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:29:44.078700  466848 pod_ready.go:40] duration metric: took 2.408839899s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:29:44.142829  466848 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1018 10:29:44.145823  466848 out.go:203] 
	W1018 10:29:44.148754  466848 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 10:29:44.151571  466848 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 10:29:44.154442  466848 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-309062" cluster and "default" namespace by default
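kubectl is only supported within one minor version of the control plane, so the 1.33.2 client against the 1.28.0 apiserver (skew 5) is what triggers the warning above. The bundled binary sidesteps it, e.g.:

# Run the version-matched kubectl that minikube ships for this profile.
minikube -p old-k8s-version-309062 kubectl -- version
minikube -p old-k8s-version-309062 kubectl -- get pods -A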
	
	
	==> CRI-O <==
	Oct 18 10:29:41 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:41.170724048Z" level=info msg="Created container e1ca58868bdd9a7bd74bb4ac8b34ad53a121092cd74ad195cbc23352517ae32f: kube-system/coredns-5dd5756b68-4hhdr/coredns" id=2491a152-2856-4764-8357-a1cf22c0d7d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:29:41 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:41.171790565Z" level=info msg="Starting container: e1ca58868bdd9a7bd74bb4ac8b34ad53a121092cd74ad195cbc23352517ae32f" id=8cdbaa4e-3d42-490f-b365-c6c5f4212812 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:29:41 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:41.174202938Z" level=info msg="Started container" PID=1928 containerID=e1ca58868bdd9a7bd74bb4ac8b34ad53a121092cd74ad195cbc23352517ae32f description=kube-system/coredns-5dd5756b68-4hhdr/coredns id=8cdbaa4e-3d42-490f-b365-c6c5f4212812 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7efa4bc2bbd73c51c889eb7c802f3f3fafb82745e474868b191a0800a5797dc8
	Oct 18 10:29:44 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:44.677474327Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9e319765-1d5a-42cf-9078-363497a77732 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:29:44 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:44.677542044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:29:44 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:44.684312361Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:64682c20f013010cf91d8f084e9d1d63b6f8546995a15f936ffc929f84eda21c UID:7e943026-3e85-454f-a324-37c76beb91b8 NetNS:/var/run/netns/cdd71b0c-1383-4897-b6ab-820acf39f7e3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002a02500}] Aliases:map[]}"
	Oct 18 10:29:44 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:44.685623934Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 10:29:44 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:44.702926577Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:64682c20f013010cf91d8f084e9d1d63b6f8546995a15f936ffc929f84eda21c UID:7e943026-3e85-454f-a324-37c76beb91b8 NetNS:/var/run/netns/cdd71b0c-1383-4897-b6ab-820acf39f7e3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002a02500}] Aliases:map[]}"
	Oct 18 10:29:44 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:44.703071948Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 10:29:44 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:44.708460463Z" level=info msg="Ran pod sandbox 64682c20f013010cf91d8f084e9d1d63b6f8546995a15f936ffc929f84eda21c with infra container: default/busybox/POD" id=9e319765-1d5a-42cf-9078-363497a77732 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:29:44 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:44.709667568Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ecb8422c-6c8a-48ca-b414-901e7b1efefb name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:29:44 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:44.7097909Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ecb8422c-6c8a-48ca-b414-901e7b1efefb name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:29:44 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:44.709829022Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ecb8422c-6c8a-48ca-b414-901e7b1efefb name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:29:44 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:44.710648087Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cdfb3f7a-e358-4760-ab93-f285294c5dd5 name=/runtime.v1.ImageService/PullImage
	Oct 18 10:29:44 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:44.712701877Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 10:29:46 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:46.700296393Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=cdfb3f7a-e358-4760-ab93-f285294c5dd5 name=/runtime.v1.ImageService/PullImage
	Oct 18 10:29:46 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:46.701450082Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b8942140-d7ce-407b-8c76-b7adcdcb2ce0 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:29:46 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:46.70517151Z" level=info msg="Creating container: default/busybox/busybox" id=06baffcd-2dcc-4edb-89ff-c03d46278a5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:29:46 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:46.706082096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:29:46 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:46.712692904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:29:46 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:46.713272633Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:29:46 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:46.730060294Z" level=info msg="Created container 5df631e02a8857030ae3fe5dd694ffb956390b4e2c63759ed83c5dfdc348859f: default/busybox/busybox" id=06baffcd-2dcc-4edb-89ff-c03d46278a5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:29:46 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:46.731029335Z" level=info msg="Starting container: 5df631e02a8857030ae3fe5dd694ffb956390b4e2c63759ed83c5dfdc348859f" id=74098572-c80f-4d27-9c0c-d749a90bd473 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:29:46 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:46.732553898Z" level=info msg="Started container" PID=1982 containerID=5df631e02a8857030ae3fe5dd694ffb956390b4e2c63759ed83c5dfdc348859f description=default/busybox/busybox id=74098572-c80f-4d27-9c0c-d749a90bd473 name=/runtime.v1.RuntimeService/StartContainer sandboxID=64682c20f013010cf91d8f084e9d1d63b6f8546995a15f936ffc929f84eda21c
	Oct 18 10:29:53 old-k8s-version-309062 crio[834]: time="2025-10-18T10:29:53.475153565Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
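This CRI-O excerpt appears to be gathered from the runtime's systemd journal on the node; the same stream can be tailed directly, as a sketch:

# Follow the last 50 lines of the crio unit inside the minikube node.
minikube -p old-k8s-version-309062 ssh -- sudo journalctl -u crio -n 50 --no-pager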
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	5df631e02a885       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   64682c20f0130       busybox                                          default
	e1ca58868bdd9       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   7efa4bc2bbd73       coredns-5dd5756b68-4hhdr                         kube-system
	9709c0d4c3176       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   60cc7022fd68a       storage-provisioner                              kube-system
	6213283b1e744       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   6492e5b8367df       kindnet-fqnmf                                    kube-system
	0ccc7e5c91c58       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   ec5d96095f9d0       kube-proxy-xvwns                                 kube-system
	d50a833cf23a1       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   8fed39c19ef99       kube-scheduler-old-k8s-version-309062            kube-system
	972274a3ef83f       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   5c1cd70afaa1d       kube-apiserver-old-k8s-version-309062            kube-system
	7a97bd1a8e46d       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   de115d8bc982b       etcd-old-k8s-version-309062                      kube-system
	b5e87732d36a9       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   0d0b519d535f4       kube-controller-manager-old-k8s-version-309062   kube-system
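The table above matches what crictl reports for this runtime; to regenerate it on the node, including exited attempts, something like the following should do (hedged):

# List every container CRI-O knows about, mirroring the status table.
minikube -p old-k8s-version-309062 ssh -- sudo crictl ps -a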
	
	
	==> coredns [e1ca58868bdd9a7bd74bb4ac8b34ad53a121092cd74ad195cbc23352517ae32f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40441 - 37808 "HINFO IN 2132114309756830609.8608609571138447238. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032486681s
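With CoreDNS serving, the host record injected earlier is resolvable from any pod. A throwaway pod gives a one-shot check; busybox:1.28 is the conventional DNS-debug image, not one this run pulled:

# Expect an answer of 192.168.76.1 for the injected name.
kubectl --context old-k8s-version-309062 run dns-probe --rm -it \
  --image=busybox:1.28 --restart=Never -- nslookup host.minikube.internal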
	
	
	==> describe nodes <==
	Name:               old-k8s-version-309062
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-309062
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=old-k8s-version-309062
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_29_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:29:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-309062
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:29:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:29:44 +0000   Sat, 18 Oct 2025 10:29:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:29:44 +0000   Sat, 18 Oct 2025 10:29:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:29:44 +0000   Sat, 18 Oct 2025 10:29:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 10:29:44 +0000   Sat, 18 Oct 2025 10:29:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-309062
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                98c6c6ec-8267-4a2c-858a-d465056e6aea
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-4hhdr                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-309062                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-fqnmf                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-309062             250m (12%)    0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-controller-manager-old-k8s-version-309062    200m (10%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-xvwns                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-309062             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 50s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node old-k8s-version-309062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node old-k8s-version-309062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x8 over 50s)  kubelet          Node old-k8s-version-309062 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-309062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-309062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-309062 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-309062 event: Registered Node old-k8s-version-309062 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-309062 status is now: NodeReady
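This block is plain `kubectl describe node` output, and the allocation figures check out (850m requested of 2 CPUs is the 42% shown). To re-render it, or pull just the allocatable numbers:

kubectl --context old-k8s-version-309062 describe node old-k8s-version-309062
kubectl --context old-k8s-version-309062 get node old-k8s-version-309062 \
  -o jsonpath='{.status.allocatable}'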
	
	
	==> dmesg <==
	[ +25.695966] overlayfs: idmapped layers are currently not supported
	[Oct18 10:05] overlayfs: idmapped layers are currently not supported
	[Oct18 10:10] overlayfs: idmapped layers are currently not supported
	[ +35.463301] overlayfs: idmapped layers are currently not supported
	[Oct18 10:11] overlayfs: idmapped layers are currently not supported
	[Oct18 10:13] overlayfs: idmapped layers are currently not supported
	[Oct18 10:14] overlayfs: idmapped layers are currently not supported
	[Oct18 10:15] overlayfs: idmapped layers are currently not supported
	[Oct18 10:16] overlayfs: idmapped layers are currently not supported
	[  +1.944912] overlayfs: idmapped layers are currently not supported
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	[ +55.677340] overlayfs: idmapped layers are currently not supported
	[  +3.870584] overlayfs: idmapped layers are currently not supported
	[Oct18 10:24] overlayfs: idmapped layers are currently not supported
	[ +31.226998] overlayfs: idmapped layers are currently not supported
	[Oct18 10:27] overlayfs: idmapped layers are currently not supported
	[ +41.576921] overlayfs: idmapped layers are currently not supported
	[  +5.117406] overlayfs: idmapped layers are currently not supported
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7a97bd1a8e46d8bb82e80c2138323a62a61213bc47c7c5e31bfc1d793824ebad] <==
	{"level":"info","ts":"2025-10-18T10:29:06.488346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-18T10:29:06.488468Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-18T10:29:06.494773Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T10:29:06.495029Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T10:29:06.495184Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T10:29:06.504291Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T10:29:06.50487Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T10:29:07.141236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-18T10:29:07.141362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-18T10:29:07.141414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-10-18T10:29:07.141466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-10-18T10:29:07.141496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-18T10:29:07.141562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-10-18T10:29:07.141596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-18T10:29:07.144299Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T10:29:07.149376Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-309062 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T10:29:07.149474Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T10:29:07.150505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-18T10:29:07.150692Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T10:29:07.15157Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-18T10:29:07.151924Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T10:29:07.151976Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T10:29:07.152243Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T10:29:07.152349Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T10:29:07.1524Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 10:29:55 up  2:12,  0 user,  load average: 2.67, 3.54, 2.77
	Linux old-k8s-version-309062 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6213283b1e744fdee64b743b414affec6554774188ca64c0d4fbce4ebe2d84ba] <==
	I1018 10:29:29.915890       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:29:29.916123       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 10:29:29.916241       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:29:29.916260       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:29:29.916274       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:29:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:29:30.310325       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:29:30.316659       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:29:30.316742       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:29:30.317265       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 10:29:30.417368       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 10:29:30.417522       1 metrics.go:72] Registering metrics
	I1018 10:29:30.417606       1 controller.go:711] "Syncing nftables rules"
	I1018 10:29:40.220281       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:29:40.220319       1 main.go:301] handling current node
	I1018 10:29:50.217638       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:29:50.217671       1 main.go:301] handling current node
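kindnet's "Syncing nftables rules" lines come from its network-policy controller; the resulting tables are visible on the node, though table names vary by kindnet version (hedged):

minikube -p old-k8s-version-309062 ssh -- sudo nft list tables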
	
	
	==> kube-apiserver [972274a3ef83fc730ab28f067206f027c659a682c9f4b6e1e9d23aa07b89ffb4] <==
	I1018 10:29:10.559475       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 10:29:10.560404       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 10:29:10.563459       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 10:29:10.563501       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 10:29:10.563514       1 aggregator.go:166] initial CRD sync complete...
	I1018 10:29:10.563520       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 10:29:10.563525       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 10:29:10.563531       1 cache.go:39] Caches are synced for autoregister controller
	E1018 10:29:10.583825       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1018 10:29:10.801022       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 10:29:11.364345       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 10:29:11.369549       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 10:29:11.369571       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 10:29:12.113292       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 10:29:12.171147       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 10:29:12.290734       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 10:29:12.297951       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 10:29:12.299260       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 10:29:12.303826       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 10:29:12.521342       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 10:29:13.964487       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 10:29:13.979195       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 10:29:13.994469       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1018 10:29:26.178670       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 10:29:26.228899       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [b5e87732d36a90985386b13ee9916dcb218ba727ba87bf1278df7c676dfc0642] <==
	I1018 10:29:25.508706       1 shared_informer.go:318] Caches are synced for disruption
	I1018 10:29:25.523509       1 shared_informer.go:318] Caches are synced for stateful set
	I1018 10:29:25.924233       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 10:29:25.924265       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 10:29:25.924625       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 10:29:26.184417       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1018 10:29:26.243070       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fqnmf"
	I1018 10:29:26.248389       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xvwns"
	I1018 10:29:26.416602       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-wpk4q"
	I1018 10:29:26.429970       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-4hhdr"
	I1018 10:29:26.442083       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="258.474829ms"
	I1018 10:29:26.484849       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.708377ms"
	I1018 10:29:26.550932       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.030191ms"
	I1018 10:29:26.551043       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.359µs"
	I1018 10:29:28.128748       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1018 10:29:28.194752       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-wpk4q"
	I1018 10:29:28.219683       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.701457ms"
	I1018 10:29:28.252162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.426699ms"
	I1018 10:29:28.252242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.893µs"
	I1018 10:29:40.737738       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.274µs"
	I1018 10:29:40.758055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.058µs"
	I1018 10:29:41.330929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.627µs"
	I1018 10:29:42.334021       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.535822ms"
	I1018 10:29:42.334370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.432µs"
	I1018 10:29:45.405416       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [0ccc7e5c91c58e9299dbb2c897bd83d4fcd1c40d7cdaa46af49624de1b0d8109] <==
	I1018 10:29:27.707891       1 server_others.go:69] "Using iptables proxy"
	I1018 10:29:27.732566       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1018 10:29:27.786567       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:29:27.798336       1 server_others.go:152] "Using iptables Proxier"
	I1018 10:29:27.798371       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 10:29:27.798378       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 10:29:27.798409       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 10:29:27.798658       1 server.go:846] "Version info" version="v1.28.0"
	I1018 10:29:27.798667       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:29:27.816146       1 config.go:97] "Starting endpoint slice config controller"
	I1018 10:29:27.816185       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 10:29:27.816407       1 config.go:188] "Starting service config controller"
	I1018 10:29:27.816413       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 10:29:27.816429       1 config.go:315] "Starting node config controller"
	I1018 10:29:27.816433       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 10:29:27.916640       1 shared_informer.go:318] Caches are synced for node config
	I1018 10:29:27.916681       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1018 10:29:27.916708       1 shared_informer.go:318] Caches are synced for service config
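kube-proxy settled on the iptables proxier, so each Service gets a chain hanging off KUBE-SERVICES in the nat table. A spot-check sketch; the kube-dns ClusterIP 10.96.0.10 allocated earlier should appear:

minikube -p old-k8s-version-309062 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n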
	
	
	==> kube-scheduler [d50a833cf23a134624becdbaaca24959617c9311d22b6ff993b7cf3b42147ed4] <==
	E1018 10:29:10.964890       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1018 10:29:10.964859       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1018 10:29:10.964799       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1018 10:29:10.964916       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1018 10:29:10.965016       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1018 10:29:10.965071       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1018 10:29:10.967410       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1018 10:29:10.967444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1018 10:29:10.967416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1018 10:29:10.967463       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1018 10:29:10.967528       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1018 10:29:10.967542       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1018 10:29:10.967583       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1018 10:29:10.967601       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1018 10:29:10.967677       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1018 10:29:10.967732       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1018 10:29:11.778020       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1018 10:29:11.778061       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1018 10:29:11.813771       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1018 10:29:11.813876       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1018 10:29:11.828989       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1018 10:29:11.829035       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1018 10:29:12.102678       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1018 10:29:12.102718       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1018 10:29:14.454170       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 10:29:26 old-k8s-version-309062 kubelet[1361]: I1018 10:29:26.280572    1361 topology_manager.go:215] "Topology Admit Handler" podUID="de00d48f-5320-44a6-8cab-46be84cf20ec" podNamespace="kube-system" podName="kube-proxy-xvwns"
	Oct 18 10:29:26 old-k8s-version-309062 kubelet[1361]: W1018 10:29:26.317078    1361 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-309062" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-309062' and this object
	Oct 18 10:29:26 old-k8s-version-309062 kubelet[1361]: E1018 10:29:26.317305    1361 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-309062" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-309062' and this object
	Oct 18 10:29:26 old-k8s-version-309062 kubelet[1361]: I1018 10:29:26.361755    1361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99893a80-27ed-4abd-8b4e-5cda737b5c5f-lib-modules\") pod \"kindnet-fqnmf\" (UID: \"99893a80-27ed-4abd-8b4e-5cda737b5c5f\") " pod="kube-system/kindnet-fqnmf"
	Oct 18 10:29:26 old-k8s-version-309062 kubelet[1361]: I1018 10:29:26.361864    1361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8t8x\" (UniqueName: \"kubernetes.io/projected/99893a80-27ed-4abd-8b4e-5cda737b5c5f-kube-api-access-b8t8x\") pod \"kindnet-fqnmf\" (UID: \"99893a80-27ed-4abd-8b4e-5cda737b5c5f\") " pod="kube-system/kindnet-fqnmf"
	Oct 18 10:29:26 old-k8s-version-309062 kubelet[1361]: I1018 10:29:26.361917    1361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhl62\" (UniqueName: \"kubernetes.io/projected/de00d48f-5320-44a6-8cab-46be84cf20ec-kube-api-access-dhl62\") pod \"kube-proxy-xvwns\" (UID: \"de00d48f-5320-44a6-8cab-46be84cf20ec\") " pod="kube-system/kube-proxy-xvwns"
	Oct 18 10:29:26 old-k8s-version-309062 kubelet[1361]: I1018 10:29:26.361994    1361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99893a80-27ed-4abd-8b4e-5cda737b5c5f-xtables-lock\") pod \"kindnet-fqnmf\" (UID: \"99893a80-27ed-4abd-8b4e-5cda737b5c5f\") " pod="kube-system/kindnet-fqnmf"
	Oct 18 10:29:26 old-k8s-version-309062 kubelet[1361]: I1018 10:29:26.362020    1361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/de00d48f-5320-44a6-8cab-46be84cf20ec-kube-proxy\") pod \"kube-proxy-xvwns\" (UID: \"de00d48f-5320-44a6-8cab-46be84cf20ec\") " pod="kube-system/kube-proxy-xvwns"
	Oct 18 10:29:26 old-k8s-version-309062 kubelet[1361]: I1018 10:29:26.362070    1361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/99893a80-27ed-4abd-8b4e-5cda737b5c5f-cni-cfg\") pod \"kindnet-fqnmf\" (UID: \"99893a80-27ed-4abd-8b4e-5cda737b5c5f\") " pod="kube-system/kindnet-fqnmf"
	Oct 18 10:29:26 old-k8s-version-309062 kubelet[1361]: I1018 10:29:26.362102    1361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de00d48f-5320-44a6-8cab-46be84cf20ec-xtables-lock\") pod \"kube-proxy-xvwns\" (UID: \"de00d48f-5320-44a6-8cab-46be84cf20ec\") " pod="kube-system/kube-proxy-xvwns"
	Oct 18 10:29:26 old-k8s-version-309062 kubelet[1361]: I1018 10:29:26.362167    1361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de00d48f-5320-44a6-8cab-46be84cf20ec-lib-modules\") pod \"kube-proxy-xvwns\" (UID: \"de00d48f-5320-44a6-8cab-46be84cf20ec\") " pod="kube-system/kube-proxy-xvwns"
	Oct 18 10:29:28 old-k8s-version-309062 kubelet[1361]: I1018 10:29:28.305891    1361 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xvwns" podStartSLOduration=2.305846059 podCreationTimestamp="2025-10-18 10:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:29:28.303720442 +0000 UTC m=+14.383785463" watchObservedRunningTime="2025-10-18 10:29:28.305846059 +0000 UTC m=+14.385911072"
	Oct 18 10:29:34 old-k8s-version-309062 kubelet[1361]: I1018 10:29:34.152593    1361 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-fqnmf" podStartSLOduration=4.91902882 podCreationTimestamp="2025-10-18 10:29:26 +0000 UTC" firstStartedPulling="2025-10-18 10:29:26.605879787 +0000 UTC m=+12.685944800" lastFinishedPulling="2025-10-18 10:29:29.839383262 +0000 UTC m=+15.919448275" observedRunningTime="2025-10-18 10:29:30.283417984 +0000 UTC m=+16.363483013" watchObservedRunningTime="2025-10-18 10:29:34.152532295 +0000 UTC m=+20.232597316"
	Oct 18 10:29:40 old-k8s-version-309062 kubelet[1361]: I1018 10:29:40.695682    1361 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 18 10:29:40 old-k8s-version-309062 kubelet[1361]: I1018 10:29:40.734797    1361 topology_manager.go:215] "Topology Admit Handler" podUID="bc5203af-3c2b-48b1-a3a6-80d606975fd6" podNamespace="kube-system" podName="coredns-5dd5756b68-4hhdr"
	Oct 18 10:29:40 old-k8s-version-309062 kubelet[1361]: I1018 10:29:40.740303    1361 topology_manager.go:215] "Topology Admit Handler" podUID="a42efbaf-4d67-4611-9aac-34048dc7b962" podNamespace="kube-system" podName="storage-provisioner"
	Oct 18 10:29:40 old-k8s-version-309062 kubelet[1361]: I1018 10:29:40.882084    1361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc5203af-3c2b-48b1-a3a6-80d606975fd6-config-volume\") pod \"coredns-5dd5756b68-4hhdr\" (UID: \"bc5203af-3c2b-48b1-a3a6-80d606975fd6\") " pod="kube-system/coredns-5dd5756b68-4hhdr"
	Oct 18 10:29:40 old-k8s-version-309062 kubelet[1361]: I1018 10:29:40.882162    1361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj9lj\" (UniqueName: \"kubernetes.io/projected/a42efbaf-4d67-4611-9aac-34048dc7b962-kube-api-access-cj9lj\") pod \"storage-provisioner\" (UID: \"a42efbaf-4d67-4611-9aac-34048dc7b962\") " pod="kube-system/storage-provisioner"
	Oct 18 10:29:40 old-k8s-version-309062 kubelet[1361]: I1018 10:29:40.882194    1361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqw2m\" (UniqueName: \"kubernetes.io/projected/bc5203af-3c2b-48b1-a3a6-80d606975fd6-kube-api-access-cqw2m\") pod \"coredns-5dd5756b68-4hhdr\" (UID: \"bc5203af-3c2b-48b1-a3a6-80d606975fd6\") " pod="kube-system/coredns-5dd5756b68-4hhdr"
	Oct 18 10:29:40 old-k8s-version-309062 kubelet[1361]: I1018 10:29:40.882223    1361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a42efbaf-4d67-4611-9aac-34048dc7b962-tmp\") pod \"storage-provisioner\" (UID: \"a42efbaf-4d67-4611-9aac-34048dc7b962\") " pod="kube-system/storage-provisioner"
	Oct 18 10:29:41 old-k8s-version-309062 kubelet[1361]: I1018 10:29:41.332238    1361 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.33219459 podCreationTimestamp="2025-10-18 10:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:29:41.315703917 +0000 UTC m=+27.395768930" watchObservedRunningTime="2025-10-18 10:29:41.33219459 +0000 UTC m=+27.412259603"
	Oct 18 10:29:42 old-k8s-version-309062 kubelet[1361]: I1018 10:29:42.324670    1361 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-4hhdr" podStartSLOduration=16.324622797 podCreationTimestamp="2025-10-18 10:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:29:41.33296678 +0000 UTC m=+27.413031793" watchObservedRunningTime="2025-10-18 10:29:42.324622797 +0000 UTC m=+28.404687810"
	Oct 18 10:29:44 old-k8s-version-309062 kubelet[1361]: I1018 10:29:44.375259    1361 topology_manager.go:215] "Topology Admit Handler" podUID="7e943026-3e85-454f-a324-37c76beb91b8" podNamespace="default" podName="busybox"
	Oct 18 10:29:44 old-k8s-version-309062 kubelet[1361]: I1018 10:29:44.501231    1361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwdd2\" (UniqueName: \"kubernetes.io/projected/7e943026-3e85-454f-a324-37c76beb91b8-kube-api-access-xwdd2\") pod \"busybox\" (UID: \"7e943026-3e85-454f-a324-37c76beb91b8\") " pod="default/busybox"
	Oct 18 10:29:44 old-k8s-version-309062 kubelet[1361]: W1018 10:29:44.704772    1361 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750/crio-64682c20f013010cf91d8f084e9d1d63b6f8546995a15f936ffc929f84eda21c WatchSource:0}: Error finding container 64682c20f013010cf91d8f084e9d1d63b6f8546995a15f936ffc929f84eda21c: Status 404 returned error can't find the container with id 64682c20f013010cf91d8f084e9d1d63b6f8546995a15f936ffc929f84eda21c
	
	
	==> storage-provisioner [9709c0d4c3176033cec08d0d5e5742f8b590df31db3799b976a1d4b9b17b98c2] <==
	I1018 10:29:41.136021       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 10:29:41.174993       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 10:29:41.175767       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 10:29:41.198328       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 10:29:41.198572       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-309062_b450b362-685b-48aa-bc21-a1074de7b839!
	I1018 10:29:41.201876       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"30bc5ecc-ff23-48f8-9195-73f60d25bbff", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-309062_b450b362-685b-48aa-bc21-a1074de7b839 became leader
	I1018 10:29:41.301264       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-309062_b450b362-685b-48aa-bc21-a1074de7b839!
	

                                                
                                                
-- /stdout --
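Note: the storage-provisioner log above traces the standard client-go leader-election handshake: attempt to acquire the kube-system/k8s.io-minikube-hostpath lock, record a "became leader" event, then start the provisioner controller. Below is a minimal sketch of that pattern, assuming in-cluster config and a Lease-based lock (the provisioner above uses an older Endpoints-based lock; the timeouts and identity here are illustrative, not the provisioner's actual code):

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes this runs inside the cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lock named after the one in the log; identity is illustrative
	// (the log uses hostname_uuid, e.g. old-k8s-version-309062_b450b362-...).
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// "Starting provisioner controller ..." happens here,
				// only after the lock is held.
			},
			OnStoppedLeading: func() {
				// Lost the lock; stop doing leader-only work.
			},
		},
	})
}

The ordering in the log (lock acquired first, controller started afterwards) corresponds to OnStartedLeading firing once the lease is held.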
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-309062 -n old-k8s-version-309062
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-309062 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.50s)
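For context on the kube-scheduler entries in the log dump above: this is the usual startup burst, where the scheduler's informers begin listing and watching before its RBAC bindings are visible, so every list comes back "forbidden" until the caches sync (the final "Caches are synced" line). The same authorization question can be asked out of band with a SubjectAccessReview, equivalent to `kubectl auth can-i list persistentvolumes --as=system:kube-scheduler`. A minimal sketch, assuming an admin kubeconfig at the default path:

package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config holds admin credentials for the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Ask the apiserver the same question the failing watch asked:
	// may system:kube-scheduler list persistentvolumes cluster-wide?
	sar := &authorizationv1.SubjectAccessReview{
		Spec: authorizationv1.SubjectAccessReviewSpec{
			User: "system:kube-scheduler",
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "list",
				Resource: "persistentvolumes",
			},
		},
	}
	resp, err := client.AuthorizationV1().SubjectAccessReviews().Create(
		context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}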

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-309062 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-309062 --alsologtostderr -v=1: exit status 80 (2.210924747s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-309062 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 10:31:13.318666  472868 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:31:13.318777  472868 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:31:13.318787  472868 out.go:374] Setting ErrFile to fd 2...
	I1018 10:31:13.318792  472868 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:31:13.319042  472868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:31:13.319282  472868 out.go:368] Setting JSON to false
	I1018 10:31:13.319328  472868 mustload.go:65] Loading cluster: old-k8s-version-309062
	I1018 10:31:13.319716  472868 config.go:182] Loaded profile config "old-k8s-version-309062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 10:31:13.320175  472868 cli_runner.go:164] Run: docker container inspect old-k8s-version-309062 --format={{.State.Status}}
	I1018 10:31:13.337300  472868 host.go:66] Checking if "old-k8s-version-309062" exists ...
	I1018 10:31:13.337614  472868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:31:13.397961  472868 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 10:31:13.387793656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:31:13.398617  472868 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-309062 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 10:31:13.402192  472868 out.go:179] * Pausing node old-k8s-version-309062 ... 
	I1018 10:31:13.405278  472868 host.go:66] Checking if "old-k8s-version-309062" exists ...
	I1018 10:31:13.405615  472868 ssh_runner.go:195] Run: systemctl --version
	I1018 10:31:13.405676  472868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-309062
	I1018 10:31:13.423273  472868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/old-k8s-version-309062/id_rsa Username:docker}
	I1018 10:31:13.529570  472868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:31:13.562287  472868 pause.go:52] kubelet running: true
	I1018 10:31:13.562369  472868 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:31:13.872438  472868 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:31:13.872518  472868 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:31:13.962783  472868 cri.go:89] found id: "28b068be6ba6cf1e3afbc8ec9e6600adf10e615cd713b32937ec5dcd20863c29"
	I1018 10:31:13.962804  472868 cri.go:89] found id: "f3a8ae82b8c31103a9aa668ee614af5b2764449e218994b6d3ae42ddd5d15820"
	I1018 10:31:13.962808  472868 cri.go:89] found id: "6ae7dbec44172407905dbebfe46f720674c9f2a6f90db903589de445d94e3e52"
	I1018 10:31:13.962812  472868 cri.go:89] found id: "d4018cae69b9bfaf869931ec009785bb9133d02e8a0d3e946390b18d7dd19a77"
	I1018 10:31:13.962815  472868 cri.go:89] found id: "fe5b69f4fe8d8244c0c04923326903928bd6aa32735e98e900cef0e8929410f7"
	I1018 10:31:13.962818  472868 cri.go:89] found id: "cba131a162c9f2548c2ed732bf800e1f1257451692a681ba3d9bdb6f674084dc"
	I1018 10:31:13.962821  472868 cri.go:89] found id: "0cb67d3420bb2266844f350fcf9b4b39a84e2336671d33e7f75ac5c9327f4f9b"
	I1018 10:31:13.962824  472868 cri.go:89] found id: "c26f04e131bc5e297415a1e4c9e06a6a5e26b988a8b4b5335276049aefdc00d0"
	I1018 10:31:13.962827  472868 cri.go:89] found id: "a5dd6148e2e536dc32d642fcbf3fcb348930710fc9902f0cd2429867c75a933d"
	I1018 10:31:13.962836  472868 cri.go:89] found id: "df2bceb48eb6286384fe88e5aaf40fa9a00026686b7d31781f358616df89bec5"
	I1018 10:31:13.962839  472868 cri.go:89] found id: "1221147b76c646921a654388aa40c79193a87c89eee11d8ab529f19b710f6028"
	I1018 10:31:13.962842  472868 cri.go:89] found id: ""
	I1018 10:31:13.962892  472868 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:31:13.974223  472868 retry.go:31] will retry after 328.404353ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:31:13Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:31:14.303741  472868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:31:14.317372  472868 pause.go:52] kubelet running: false
	I1018 10:31:14.317433  472868 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:31:14.556707  472868 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:31:14.556781  472868 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:31:14.659866  472868 cri.go:89] found id: "28b068be6ba6cf1e3afbc8ec9e6600adf10e615cd713b32937ec5dcd20863c29"
	I1018 10:31:14.659886  472868 cri.go:89] found id: "f3a8ae82b8c31103a9aa668ee614af5b2764449e218994b6d3ae42ddd5d15820"
	I1018 10:31:14.659891  472868 cri.go:89] found id: "6ae7dbec44172407905dbebfe46f720674c9f2a6f90db903589de445d94e3e52"
	I1018 10:31:14.659895  472868 cri.go:89] found id: "d4018cae69b9bfaf869931ec009785bb9133d02e8a0d3e946390b18d7dd19a77"
	I1018 10:31:14.659899  472868 cri.go:89] found id: "fe5b69f4fe8d8244c0c04923326903928bd6aa32735e98e900cef0e8929410f7"
	I1018 10:31:14.659907  472868 cri.go:89] found id: "cba131a162c9f2548c2ed732bf800e1f1257451692a681ba3d9bdb6f674084dc"
	I1018 10:31:14.659911  472868 cri.go:89] found id: "0cb67d3420bb2266844f350fcf9b4b39a84e2336671d33e7f75ac5c9327f4f9b"
	I1018 10:31:14.659914  472868 cri.go:89] found id: "c26f04e131bc5e297415a1e4c9e06a6a5e26b988a8b4b5335276049aefdc00d0"
	I1018 10:31:14.659917  472868 cri.go:89] found id: "a5dd6148e2e536dc32d642fcbf3fcb348930710fc9902f0cd2429867c75a933d"
	I1018 10:31:14.659923  472868 cri.go:89] found id: "df2bceb48eb6286384fe88e5aaf40fa9a00026686b7d31781f358616df89bec5"
	I1018 10:31:14.659926  472868 cri.go:89] found id: "1221147b76c646921a654388aa40c79193a87c89eee11d8ab529f19b710f6028"
	I1018 10:31:14.659929  472868 cri.go:89] found id: ""
	I1018 10:31:14.659980  472868 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:31:14.672257  472868 retry.go:31] will retry after 346.816562ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:31:14Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:31:15.019824  472868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:31:15.069165  472868 pause.go:52] kubelet running: false
	I1018 10:31:15.069251  472868 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:31:15.322324  472868 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:31:15.322391  472868 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:31:15.442669  472868 cri.go:89] found id: "28b068be6ba6cf1e3afbc8ec9e6600adf10e615cd713b32937ec5dcd20863c29"
	I1018 10:31:15.442687  472868 cri.go:89] found id: "f3a8ae82b8c31103a9aa668ee614af5b2764449e218994b6d3ae42ddd5d15820"
	I1018 10:31:15.442691  472868 cri.go:89] found id: "6ae7dbec44172407905dbebfe46f720674c9f2a6f90db903589de445d94e3e52"
	I1018 10:31:15.442695  472868 cri.go:89] found id: "d4018cae69b9bfaf869931ec009785bb9133d02e8a0d3e946390b18d7dd19a77"
	I1018 10:31:15.442699  472868 cri.go:89] found id: "fe5b69f4fe8d8244c0c04923326903928bd6aa32735e98e900cef0e8929410f7"
	I1018 10:31:15.442702  472868 cri.go:89] found id: "cba131a162c9f2548c2ed732bf800e1f1257451692a681ba3d9bdb6f674084dc"
	I1018 10:31:15.442706  472868 cri.go:89] found id: "0cb67d3420bb2266844f350fcf9b4b39a84e2336671d33e7f75ac5c9327f4f9b"
	I1018 10:31:15.442709  472868 cri.go:89] found id: "c26f04e131bc5e297415a1e4c9e06a6a5e26b988a8b4b5335276049aefdc00d0"
	I1018 10:31:15.442713  472868 cri.go:89] found id: "a5dd6148e2e536dc32d642fcbf3fcb348930710fc9902f0cd2429867c75a933d"
	I1018 10:31:15.442719  472868 cri.go:89] found id: "df2bceb48eb6286384fe88e5aaf40fa9a00026686b7d31781f358616df89bec5"
	I1018 10:31:15.442722  472868 cri.go:89] found id: "1221147b76c646921a654388aa40c79193a87c89eee11d8ab529f19b710f6028"
	I1018 10:31:15.442725  472868 cri.go:89] found id: ""
	I1018 10:31:15.442769  472868 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:31:15.457913  472868 out.go:203] 
	W1018 10:31:15.460916  472868 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:31:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:31:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 10:31:15.460938  472868 out.go:285] * 
	* 
	W1018 10:31:15.468254  472868 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 10:31:15.471888  472868 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-309062 --alsologtostderr -v=1 failed: exit status 80
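The failure mode is visible in the stderr above: pause finds the expected containers with crictl, but every `sudo runc list -f json` fails with "open /run/runc: no such file or directory" (the runc state directory is apparently absent on this CRI-O node), and after two short backoff retries (retry.go) minikube exits with GUEST_PAUSE. A minimal sketch of that bounded-retry pattern around the same command; the real code runs it over the SSH runner shown in the log, and the helper names here are hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runOnce executes `sudo runc list -f json`; it runs locally here for
// illustration, whereas minikube goes through its ssh_runner.
func runOnce() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

// listWithRetry mirrors the loop in the log: try, back off a few
// hundred milliseconds, and surface the last error after the final attempt.
func listWithRetry(attempts int, backoff time.Duration) ([]byte, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := runOnce()
		if err == nil {
			return out, nil
		}
		lastErr = err
		if i < attempts-1 {
			fmt.Printf("will retry after %v: %v\n", backoff, err)
			time.Sleep(backoff)
		}
	}
	return nil, fmt.Errorf("list running: %w", lastErr)
}

func main() {
	if _, err := listWithRetry(3, 350*time.Millisecond); err != nil {
		fmt.Println("X Exiting due to GUEST_PAUSE:", err)
	}
}

On a node in the state captured above, this prints the same "will retry after" lines for the first two attempts and then the final GUEST_PAUSE error, matching the three tries at 10:31:13, 10:31:14 and 10:31:15.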
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-309062
helpers_test.go:243: (dbg) docker inspect old-k8s-version-309062:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750",
	        "Created": "2025-10-18T10:28:48.73837051Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470547,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:30:08.554919373Z",
	            "FinishedAt": "2025-10-18T10:30:07.717738965Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750/hostname",
	        "HostsPath": "/var/lib/docker/containers/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750/hosts",
	        "LogPath": "/var/lib/docker/containers/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750-json.log",
	        "Name": "/old-k8s-version-309062",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-309062:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-309062",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750",
	                "LowerDir": "/var/lib/docker/overlay2/76f2ddbb8a111823c1151fde350c303f28ae9e1b59f3c48b606ee26f7eb90656-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/76f2ddbb8a111823c1151fde350c303f28ae9e1b59f3c48b606ee26f7eb90656/merged",
	                "UpperDir": "/var/lib/docker/overlay2/76f2ddbb8a111823c1151fde350c303f28ae9e1b59f3c48b606ee26f7eb90656/diff",
	                "WorkDir": "/var/lib/docker/overlay2/76f2ddbb8a111823c1151fde350c303f28ae9e1b59f3c48b606ee26f7eb90656/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-309062",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-309062/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-309062",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-309062",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-309062",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f85fd6f80e49d2a568ed20f0d7633966608aa833dd63afb6827f09acf4992782",
	            "SandboxKey": "/var/run/docker/netns/f85fd6f80e49",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-309062": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:59:28:a5:e4:cc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "082c8a75e8eb3b8d93bfcaf0e7df425e066e901e2d22d2638140f1c9d2501c82",
	                    "EndpointID": "772e315ccce4b0ba90a5e85294dffa9bdc7ef2f2357aecc1087845c5dad38089",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-309062",
	                        "ef75e2f86668"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
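The inspect output also shows where the SSH endpoint used earlier comes from: the cli_runner line in the stderr formats NetworkSettings.Ports through a Go template, which resolves to 127.0.0.1:33424 in the JSON above. A minimal sketch of the same lookup via the docker CLI, using the container name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template the log shows minikube using to find the host SSH port.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "old-k8s-version-309062").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("ssh endpoint: 127.0.0.1:%s\n", strings.TrimSpace(string(out)))
}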
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-309062 -n old-k8s-version-309062
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-309062 -n old-k8s-version-309062: exit status 2 (435.642114ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-309062 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-309062 logs -n 25: (2.656068533s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-881658 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo containerd config dump                                                                                                                                                                                                  │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo crio config                                                                                                                                                                                                             │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ delete  │ -p cilium-881658                                                                                                                                                                                                                              │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │ 18 Oct 25 10:27 UTC │
	│ start   │ -p cert-expiration-733799 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-733799   │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │ 18 Oct 25 10:28 UTC │
	│ delete  │ -p force-systemd-env-360583                                                                                                                                                                                                                   │ force-systemd-env-360583 │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ start   │ -p cert-options-233372 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-233372      │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ ssh     │ cert-options-233372 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-233372      │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ ssh     │ -p cert-options-233372 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-233372      │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ delete  │ -p cert-options-233372                                                                                                                                                                                                                        │ cert-options-233372      │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ start   │ -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-309062   │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-309062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-309062   │ jenkins │ v1.37.0 │ 18 Oct 25 10:29 UTC │                     │
	│ stop    │ -p old-k8s-version-309062 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-309062   │ jenkins │ v1.37.0 │ 18 Oct 25 10:29 UTC │ 18 Oct 25 10:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-309062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-309062   │ jenkins │ v1.37.0 │ 18 Oct 25 10:30 UTC │ 18 Oct 25 10:30 UTC │
	│ start   │ -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-309062   │ jenkins │ v1.37.0 │ 18 Oct 25 10:30 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p cert-expiration-733799 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-733799   │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │                     │
	│ image   │ old-k8s-version-309062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-309062   │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ pause   │ -p old-k8s-version-309062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-309062   │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:31:06
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 10:31:06.940374  472598 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:31:06.940504  472598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:31:06.940508  472598 out.go:374] Setting ErrFile to fd 2...
	I1018 10:31:06.940512  472598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:31:06.940819  472598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:31:06.941578  472598 out.go:368] Setting JSON to false
	I1018 10:31:06.943138  472598 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8017,"bootTime":1760775450,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:31:06.943221  472598 start.go:141] virtualization:  
	I1018 10:31:06.946802  472598 out.go:179] * [cert-expiration-733799] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:31:06.951662  472598 notify.go:220] Checking for updates...
	I1018 10:31:06.955280  472598 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:31:06.958365  472598 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:31:06.961276  472598 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:31:06.964120  472598 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:31:06.967068  472598 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:31:06.969914  472598 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:31:06.973371  472598 config.go:182] Loaded profile config "cert-expiration-733799": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:31:06.974120  472598 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:31:07.004620  472598 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:31:07.004757  472598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:31:07.066716  472598 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 10:31:07.057023295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:31:07.066812  472598 docker.go:318] overlay module found
	I1018 10:31:07.070174  472598 out.go:179] * Using the docker driver based on existing profile
	I1018 10:31:07.073086  472598 start.go:305] selected driver: docker
	I1018 10:31:07.073096  472598 start.go:925] validating driver "docker" against &{Name:cert-expiration-733799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-733799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:31:07.073272  472598 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:31:07.074005  472598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:31:07.139257  472598 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 10:31:07.129137024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:31:07.139585  472598 cni.go:84] Creating CNI manager for ""
	I1018 10:31:07.139644  472598 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:31:07.139684  472598 start.go:349] cluster config:
	{Name:cert-expiration-733799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-733799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
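
A reading aid for the two config dumps above: they are identical except for a single field, which is exactly what this start is meant to change.

	# validated profile vs. new cluster config, the only difference:
	# -  CertExpiration:3m0s
	# +  CertExpiration:8760h0m0s
	# i.e. the --cert-expiration=8760h flag on the start command took effect.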
	I1018 10:31:07.142871  472598 out.go:179] * Starting "cert-expiration-733799" primary control-plane node in "cert-expiration-733799" cluster
	I1018 10:31:07.145742  472598 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:31:07.148714  472598 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:31:07.151831  472598 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:31:07.151866  472598 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:31:07.151885  472598 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 10:31:07.151895  472598 cache.go:58] Caching tarball of preloaded images
	I1018 10:31:07.151986  472598 preload.go:233] Found /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 10:31:07.151995  472598 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 10:31:07.152117  472598 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/config.json ...
	I1018 10:31:07.175761  472598 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:31:07.175773  472598 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 10:31:07.175793  472598 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:31:07.175815  472598 start.go:360] acquireMachinesLock for cert-expiration-733799: {Name:mk4e0847b4c10db23105e96816f6db85cd8efa9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:31:07.175877  472598 start.go:364] duration metric: took 46.097µs to acquireMachinesLock for "cert-expiration-733799"
	I1018 10:31:07.175896  472598 start.go:96] Skipping create...Using existing machine configuration
	I1018 10:31:07.175900  472598 fix.go:54] fixHost starting: 
	I1018 10:31:07.176174  472598 cli_runner.go:164] Run: docker container inspect cert-expiration-733799 --format={{.State.Status}}
	I1018 10:31:07.193271  472598 fix.go:112] recreateIfNeeded on cert-expiration-733799: state=Running err=<nil>
	W1018 10:31:07.193291  472598 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 10:31:07.196467  472598 out.go:252] * Updating the running docker "cert-expiration-733799" container ...
	I1018 10:31:07.196502  472598 machine.go:93] provisionDockerMachine start ...
	I1018 10:31:07.196579  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:07.214592  472598 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:07.214908  472598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1018 10:31:07.214915  472598 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:31:07.364345  472598 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-733799
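
The port lookup and SSH handshake above can be reproduced by hand with the same Docker Go template that appears in the log; a minimal sketch (profile name from this run; the per-machine key path is the one sshutil.go logs further down):

	# Which host port does Docker map to the container's sshd (22/tcp)?
	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' cert-expiration-733799)
	# Connect the way libmachine does: user "docker" with the per-machine RSA key.
	ssh -i "$MINIKUBE_HOME/machines/cert-expiration-733799/id_rsa" -p "$PORT" docker@127.0.0.1 hostname
	# cert-expiration-733799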
	
	I1018 10:31:07.364370  472598 ubuntu.go:182] provisioning hostname "cert-expiration-733799"
	I1018 10:31:07.364461  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:07.384871  472598 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:07.385388  472598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1018 10:31:07.385400  472598 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-733799 && echo "cert-expiration-733799" | sudo tee /etc/hostname
	I1018 10:31:07.547479  472598 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-733799
	
	I1018 10:31:07.547569  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:07.566709  472598 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:07.567013  472598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1018 10:31:07.567028  472598 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-733799' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-733799/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-733799' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:31:07.717600  472598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
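
The /etc/hosts script above is idempotent: it rewrites an existing 127.0.1.1 entry in place and appends one only when none is present, so repeated provisioning passes leave a single mapping. A quick check inside the machine (output illustrative):

	grep '^127.0.1.1' /etc/hosts
	# 127.0.1.1 cert-expiration-733799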
	I1018 10:31:07.717625  472598 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:31:07.717654  472598 ubuntu.go:190] setting up certificates
	I1018 10:31:07.717663  472598 provision.go:84] configureAuth start
	I1018 10:31:07.717720  472598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-733799
	I1018 10:31:07.736842  472598 provision.go:143] copyHostCerts
	I1018 10:31:07.736909  472598 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:31:07.736929  472598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:31:07.737005  472598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:31:07.737098  472598 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:31:07.737102  472598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:31:07.737124  472598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:31:07.737171  472598 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:31:07.737174  472598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:31:07.737350  472598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:31:07.737412  472598 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-733799 san=[127.0.0.1 192.168.85.2 cert-expiration-733799 localhost minikube]
	I1018 10:31:08.124478  472598 provision.go:177] copyRemoteCerts
	I1018 10:31:08.124538  472598 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:31:08.124590  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:08.144194  472598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/cert-expiration-733799/id_rsa Username:docker}
	I1018 10:31:08.250016  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:31:08.269316  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 10:31:08.289794  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:31:08.309253  472598 provision.go:87] duration metric: took 591.570359ms to configureAuth
	I1018 10:31:08.309271  472598 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:31:08.309454  472598 config.go:182] Loaded profile config "cert-expiration-733799": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:31:08.309561  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:08.327205  472598 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:08.327518  472598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1018 10:31:08.327537  472598 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:31:13.732325  472598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
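
The drop-in written above is a plain environment file; the CRI-O systemd unit in the base image is expected to pick it up via EnvironmentFile= (that wiring is an assumption about the kicbase image). A minimal sanity check after the restart, using the value echoed back in the log:

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio
	# active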
	
	I1018 10:31:13.732338  472598 machine.go:96] duration metric: took 6.535830085s to provisionDockerMachine
	I1018 10:31:13.732347  472598 start.go:293] postStartSetup for "cert-expiration-733799" (driver="docker")
	I1018 10:31:13.732357  472598 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:31:13.732427  472598 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:31:13.732464  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:13.761282  472598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/cert-expiration-733799/id_rsa Username:docker}
	I1018 10:31:13.876578  472598 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:31:13.880509  472598 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:31:13.880526  472598 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:31:13.880562  472598 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:31:13.880624  472598 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:31:13.880703  472598 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:31:13.880800  472598 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:31:13.888965  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:31:13.907546  472598 start.go:296] duration metric: took 175.182948ms for postStartSetup
	I1018 10:31:13.907633  472598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:31:13.907672  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:13.928848  472598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/cert-expiration-733799/id_rsa Username:docker}
	I1018 10:31:14.035342  472598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
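
Each df pipeline above extracts one field from the filesystem report: awk's NR==2 selects the data row beneath the header, and $5 / $4 pick the Use% and available-space columns respectively. Run by hand (values illustrative):

	df -h /var | awk 'NR==2{print $5}'    # e.g. 24%  -- how full /var is
	df -BG /var | awk 'NR==2{print $4}'   # e.g. 151G -- space still free, in 1G blocks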
	I1018 10:31:14.040700  472598 fix.go:56] duration metric: took 6.864792216s for fixHost
	I1018 10:31:14.040715  472598 start.go:83] releasing machines lock for "cert-expiration-733799", held for 6.864831412s
	I1018 10:31:14.040784  472598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-733799
	I1018 10:31:14.058794  472598 ssh_runner.go:195] Run: cat /version.json
	I1018 10:31:14.058846  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:14.059100  472598 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:31:14.059150  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:14.079349  472598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/cert-expiration-733799/id_rsa Username:docker}
	I1018 10:31:14.080869  472598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/cert-expiration-733799/id_rsa Username:docker}
	I1018 10:31:14.299432  472598 ssh_runner.go:195] Run: systemctl --version
	I1018 10:31:14.306373  472598 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:31:14.404276  472598 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:31:14.414048  472598 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:31:14.414133  472598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:31:14.424208  472598 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 10:31:14.424221  472598 start.go:495] detecting cgroup driver to use...
	I1018 10:31:14.424251  472598 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:31:14.424295  472598 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:31:14.459427  472598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:31:14.474433  472598 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:31:14.474487  472598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:31:14.491215  472598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:31:14.506292  472598 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:31:14.678665  472598 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:31:14.826999  472598 docker.go:234] disabling docker service ...
	I1018 10:31:14.827065  472598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:31:14.842394  472598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:31:14.855825  472598 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:31:15.004836  472598 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:31:15.209168  472598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:31:15.223605  472598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:31:15.239534  472598 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:31:15.239598  472598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:15.252769  472598 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:31:15.252825  472598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:15.263685  472598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:15.280108  472598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:15.293455  472598 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:31:15.301720  472598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:15.311990  472598 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:15.321335  472598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
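
The net effect of the sed edits above on the CRI-O drop-in can be spot-checked with one grep; the expected matches below are assembled from the values in the log (their exact placement within the stock file is an assumption):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",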
	I1018 10:31:15.331446  472598 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:31:15.343957  472598 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:31:15.351873  472598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:31:15.531270  472598 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 10:31:15.751258  472598 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:31:15.751315  472598 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:31:15.755237  472598 start.go:563] Will wait 60s for crictl version
	I1018 10:31:15.755291  472598 ssh_runner.go:195] Run: which crictl
	I1018 10:31:15.758690  472598 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:31:15.792587  472598 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
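
The flag-less probe above works because of the crictl.yaml written a few steps earlier; the explicit equivalent pins the same socket on the command line:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version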
	I1018 10:31:15.792688  472598 ssh_runner.go:195] Run: crio --version
	I1018 10:31:15.833058  472598 ssh_runner.go:195] Run: crio --version
	I1018 10:31:15.875645  472598 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.430259892Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.436797035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.437565714Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.452968552Z" level=info msg="Created container df2bceb48eb6286384fe88e5aaf40fa9a00026686b7d31781f358616df89bec5: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb/dashboard-metrics-scraper" id=0c588605-5b5f-41f8-ac1a-4daea9878635 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.453968019Z" level=info msg="Starting container: df2bceb48eb6286384fe88e5aaf40fa9a00026686b7d31781f358616df89bec5" id=d2ef1f92-06b9-453e-9137-dbf55e3b5837 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.457715511Z" level=info msg="Started container" PID=1634 containerID=df2bceb48eb6286384fe88e5aaf40fa9a00026686b7d31781f358616df89bec5 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb/dashboard-metrics-scraper id=d2ef1f92-06b9-453e-9137-dbf55e3b5837 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c002591376b626cf1d60e09c810a299f70d9961d351a962d92482ea5359d7697
	Oct 18 10:30:56 old-k8s-version-309062 conmon[1632]: conmon df2bceb48eb6286384fe <ninfo>: container 1634 exited with status 1
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.650552286Z" level=info msg="Removing container: e139066a95b1cb2406d689addd272f72a877325975d4d6858c302efa0cdd9fe0" id=4c5be669-0125-42a6-9642-916696aa8f4b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.658634532Z" level=info msg="Error loading conmon cgroup of container e139066a95b1cb2406d689addd272f72a877325975d4d6858c302efa0cdd9fe0: cgroup deleted" id=4c5be669-0125-42a6-9642-916696aa8f4b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.662140086Z" level=info msg="Removed container e139066a95b1cb2406d689addd272f72a877325975d4d6858c302efa0cdd9fe0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb/dashboard-metrics-scraper" id=4c5be669-0125-42a6-9642-916696aa8f4b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.626199321Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.630374444Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.630410752Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.630434498Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.633795108Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.633831293Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.633856622Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.637469008Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.637504274Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.637530284Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.64079957Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.640834918Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.640862521Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.644161107Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.644196127Z" level=info msg="Updated default CNI network name to kindnet"
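
The CREATE/WRITE/RENAME sequence above is kindnet writing its CNI config atomically: it streams into 10-kindnet.conflist.temp and renames the finished file into place, while CRI-O's watcher re-reads /etc/cni/net.d on every event. Confirming the active network by hand (.name is part of the standard CNI conflist schema; jq being installed is an assumption):

	ls /etc/cni/net.d/
	# 10-kindnet.conflist
	jq -r '.name' /etc/cni/net.d/10-kindnet.conflist
	# kindnet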
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	df2bceb48eb62       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   c002591376b62       dashboard-metrics-scraper-5f989dc9cf-zglwb       kubernetes-dashboard
	28b068be6ba6c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   2296118fb243c       storage-provisioner                              kube-system
	1221147b76c64       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   babf1ea1dded5       kubernetes-dashboard-8694d4445c-gt5x2            kubernetes-dashboard
	06872f9d98ba5       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   fdb0ba21dcf40       busybox                                          default
	f3a8ae82b8c31       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           55 seconds ago       Running             coredns                     1                   3214cf0c2cacd       coredns-5dd5756b68-4hhdr                         kube-system
	6ae7dbec44172       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   63246fe301f39       kindnet-fqnmf                                    kube-system
	d4018cae69b9b       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           55 seconds ago       Running             kube-proxy                  1                   4e26ed2776cec       kube-proxy-xvwns                                 kube-system
	fe5b69f4fe8d8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   2296118fb243c       storage-provisioner                              kube-system
	cba131a162c9f       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   e0a856d26b336       kube-controller-manager-old-k8s-version-309062   kube-system
	0cb67d3420bb2       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   d947735e30e55       etcd-old-k8s-version-309062                      kube-system
	c26f04e131bc5       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   6eb03bbab5abd       kube-scheduler-old-k8s-version-309062            kube-system
	a5dd6148e2e53       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   ade6551731682       kube-apiserver-old-k8s-version-309062            kube-system
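
The table above is the runtime's own inventory of the node; with the crictl.yaml from earlier in place it can be regenerated directly (-a includes exited containers such as the failing dashboard-metrics-scraper):

	sudo crictl ps -a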
	
	
	==> coredns [f3a8ae82b8c31103a9aa668ee614af5b2764449e218994b6d3ae42ddd5d15820] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50728 - 31986 "HINFO IN 7065832067986444152.1021077917737199552. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021954631s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-309062
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-309062
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=old-k8s-version-309062
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_29_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:29:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-309062
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:31:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:30:51 +0000   Sat, 18 Oct 2025 10:29:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:30:51 +0000   Sat, 18 Oct 2025 10:29:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:30:51 +0000   Sat, 18 Oct 2025 10:29:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 10:30:51 +0000   Sat, 18 Oct 2025 10:29:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-309062
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                98c6c6ec-8267-4a2c-858a-d465056e6aea
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-4hhdr                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-old-k8s-version-309062                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-fqnmf                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-309062             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-old-k8s-version-309062    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-xvwns                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-309062             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-zglwb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-gt5x2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 54s                    kube-proxy       
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-309062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-309062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-309062 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m3s                   kubelet          Node old-k8s-version-309062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m3s                   kubelet          Node old-k8s-version-309062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m3s                   kubelet          Node old-k8s-version-309062 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                   node-controller  Node old-k8s-version-309062 event: Registered Node old-k8s-version-309062 in Controller
	  Normal  NodeReady                97s                    kubelet          Node old-k8s-version-309062 status is now: NodeReady
	  Normal  Starting                 62s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node old-k8s-version-309062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node old-k8s-version-309062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node old-k8s-version-309062 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                    node-controller  Node old-k8s-version-309062 event: Registered Node old-k8s-version-309062 in Controller
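
This dump mirrors kubectl's node view; the same labels, conditions, capacity, and event tables can be pulled from the live cluster with:

	kubectl describe node old-k8s-version-309062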
	
	
	==> dmesg <==
	[Oct18 10:05] overlayfs: idmapped layers are currently not supported
	[Oct18 10:10] overlayfs: idmapped layers are currently not supported
	[ +35.463301] overlayfs: idmapped layers are currently not supported
	[Oct18 10:11] overlayfs: idmapped layers are currently not supported
	[Oct18 10:13] overlayfs: idmapped layers are currently not supported
	[Oct18 10:14] overlayfs: idmapped layers are currently not supported
	[Oct18 10:15] overlayfs: idmapped layers are currently not supported
	[Oct18 10:16] overlayfs: idmapped layers are currently not supported
	[  +1.944912] overlayfs: idmapped layers are currently not supported
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	[ +55.677340] overlayfs: idmapped layers are currently not supported
	[  +3.870584] overlayfs: idmapped layers are currently not supported
	[Oct18 10:24] overlayfs: idmapped layers are currently not supported
	[ +31.226998] overlayfs: idmapped layers are currently not supported
	[Oct18 10:27] overlayfs: idmapped layers are currently not supported
	[ +41.576921] overlayfs: idmapped layers are currently not supported
	[  +5.117406] overlayfs: idmapped layers are currently not supported
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0cb67d3420bb2266844f350fcf9b4b39a84e2336671d33e7f75ac5c9327f4f9b] <==
	{"level":"info","ts":"2025-10-18T10:30:16.338722Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T10:30:16.338817Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T10:30:16.339109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-18T10:30:16.339622Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-18T10:30:16.340605Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T10:30:16.341383Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T10:30:16.356452Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T10:30:16.357686Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T10:30:16.375885Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T10:30:16.424835Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T10:30:16.427465Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T10:30:17.334693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T10:30:17.334806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T10:30:17.334843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-18T10:30:17.334881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T10:30:17.334927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-18T10:30:17.334968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-18T10:30:17.334998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-18T10:30:17.339511Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-309062 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T10:30:17.339605Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T10:30:17.3406Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-18T10:30:17.340855Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T10:30:17.341797Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-18T10:30:17.345225Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T10:30:17.345312Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:31:17 up  2:13,  0 user,  load average: 2.98, 3.55, 2.85
	Linux old-k8s-version-309062 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6ae7dbec44172407905dbebfe46f720674c9f2a6f90db903589de445d94e3e52] <==
	I1018 10:30:22.416248       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:30:22.425485       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 10:30:22.425688       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:30:22.425701       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:30:22.425715       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:30:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:30:22.624117       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:30:22.631098       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:30:22.631239       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:30:22.631466       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 10:30:52.624190       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 10:30:52.633015       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 10:30:52.710488       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 10:30:52.711495       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1018 10:30:54.131692       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 10:30:54.131720       1 metrics.go:72] Registering metrics
	I1018 10:30:54.131771       1 controller.go:711] "Syncing nftables rules"
	I1018 10:31:02.625889       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:31:02.625947       1 main.go:301] handling current node
	I1018 10:31:12.629430       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:31:12.629466       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a5dd6148e2e536dc32d642fcbf3fcb348930710fc9902f0cd2429867c75a933d] <==
	I1018 10:30:20.150477       1 controller.go:78] Starting OpenAPI AggregationController
	I1018 10:30:20.542134       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1018 10:30:20.542207       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 10:30:20.548384       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 10:30:20.548703       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 10:30:20.549747       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 10:30:20.549771       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 10:30:20.550081       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 10:30:20.550125       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 10:30:20.551737       1 aggregator.go:166] initial CRD sync complete...
	I1018 10:30:20.551762       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 10:30:20.551769       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 10:30:20.551776       1 cache.go:39] Caches are synced for autoregister controller
	I1018 10:30:20.605980       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 10:30:21.281551       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 10:30:23.120716       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 10:30:23.180379       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 10:30:23.217551       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 10:30:23.230857       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 10:30:23.244594       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 10:30:23.351496       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.204.122"}
	I1018 10:30:23.405061       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.161.76"}
	I1018 10:30:33.291191       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 10:30:33.318799       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 10:30:33.355184       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [cba131a162c9f2548c2ed732bf800e1f1257451692a681ba3d9bdb6f674084dc] <==
	I1018 10:30:33.342804       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-zglwb"
	I1018 10:30:33.342835       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-gt5x2"
	I1018 10:30:33.372877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.286242ms"
	I1018 10:30:33.374069       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="65.945364ms"
	I1018 10:30:33.377898       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1018 10:30:33.387751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.590534ms"
	I1018 10:30:33.388001       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="69.687µs"
	I1018 10:30:33.396524       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.192µs"
	I1018 10:30:33.406518       1 shared_informer.go:318] Caches are synced for attach detach
	I1018 10:30:33.407688       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="34.75628ms"
	I1018 10:30:33.407766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="42.864µs"
	I1018 10:30:33.411222       1 shared_informer.go:318] Caches are synced for persistent volume
	I1018 10:30:33.432242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="39.967µs"
	I1018 10:30:33.809664       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 10:30:33.832082       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 10:30:33.832113       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 10:30:38.632298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.147272ms"
	I1018 10:30:38.632537       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.794µs"
	I1018 10:30:42.632613       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="269.672µs"
	I1018 10:30:43.637116       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.381µs"
	I1018 10:30:44.632359       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.23µs"
	I1018 10:30:56.671691       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.614µs"
	I1018 10:30:58.673432       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.806962ms"
	I1018 10:30:58.673609       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.081µs"
	I1018 10:31:03.687141       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.498µs"
	
	
	==> kube-proxy [d4018cae69b9bfaf869931ec009785bb9133d02e8a0d3e946390b18d7dd19a77] <==
	I1018 10:30:23.247671       1 server_others.go:69] "Using iptables proxy"
	I1018 10:30:23.282843       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1018 10:30:23.429035       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:30:23.432454       1 server_others.go:152] "Using iptables Proxier"
	I1018 10:30:23.432552       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 10:30:23.432596       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 10:30:23.432650       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 10:30:23.432886       1 server.go:846] "Version info" version="v1.28.0"
	I1018 10:30:23.433152       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:30:23.434687       1 config.go:188] "Starting service config controller"
	I1018 10:30:23.434773       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 10:30:23.434819       1 config.go:97] "Starting endpoint slice config controller"
	I1018 10:30:23.434845       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 10:30:23.436161       1 config.go:315] "Starting node config controller"
	I1018 10:30:23.436228       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 10:30:23.534965       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1018 10:30:23.535023       1 shared_informer.go:318] Caches are synced for service config
	I1018 10:30:23.536436       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [c26f04e131bc5e297415a1e4c9e06a6a5e26b988a8b4b5335276049aefdc00d0] <==
	I1018 10:30:19.866527       1 serving.go:348] Generated self-signed cert in-memory
	I1018 10:30:23.352314       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1018 10:30:23.352394       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:30:23.358344       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1018 10:30:23.358539       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1018 10:30:23.358605       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1018 10:30:23.358658       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1018 10:30:23.364942       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:30:23.365724       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1018 10:30:23.365815       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 10:30:23.365884       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1018 10:30:23.460413       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1018 10:30:23.469208       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1018 10:30:23.469320       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 10:30:33 old-k8s-version-309062 kubelet[776]: I1018 10:30:33.398320     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fa2d2419-2697-4b0f-8b80-c51fb742e12c-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-gt5x2\" (UID: \"fa2d2419-2697-4b0f-8b80-c51fb742e12c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gt5x2"
	Oct 18 10:30:33 old-k8s-version-309062 kubelet[776]: I1018 10:30:33.398430     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a1b11f6f-40ca-4dc0-a12b-a7af607494ea-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-zglwb\" (UID: \"a1b11f6f-40ca-4dc0-a12b-a7af607494ea\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb"
	Oct 18 10:30:33 old-k8s-version-309062 kubelet[776]: I1018 10:30:33.398543     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr9p8\" (UniqueName: \"kubernetes.io/projected/fa2d2419-2697-4b0f-8b80-c51fb742e12c-kube-api-access-gr9p8\") pod \"kubernetes-dashboard-8694d4445c-gt5x2\" (UID: \"fa2d2419-2697-4b0f-8b80-c51fb742e12c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gt5x2"
	Oct 18 10:30:33 old-k8s-version-309062 kubelet[776]: W1018 10:30:33.689803     776 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750/crio-babf1ea1dded50b4ee9573c2d6e6f864e2dc2d86f80674c6a6b7211c8a43d65b WatchSource:0}: Error finding container babf1ea1dded50b4ee9573c2d6e6f864e2dc2d86f80674c6a6b7211c8a43d65b: Status 404 returned error can't find the container with id babf1ea1dded50b4ee9573c2d6e6f864e2dc2d86f80674c6a6b7211c8a43d65b
	Oct 18 10:30:33 old-k8s-version-309062 kubelet[776]: W1018 10:30:33.702904     776 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750/crio-c002591376b626cf1d60e09c810a299f70d9961d351a962d92482ea5359d7697 WatchSource:0}: Error finding container c002591376b626cf1d60e09c810a299f70d9961d351a962d92482ea5359d7697: Status 404 returned error can't find the container with id c002591376b626cf1d60e09c810a299f70d9961d351a962d92482ea5359d7697
	Oct 18 10:30:38 old-k8s-version-309062 kubelet[776]: I1018 10:30:38.614459     776 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gt5x2" podStartSLOduration=1.244251068 podCreationTimestamp="2025-10-18 10:30:33 +0000 UTC" firstStartedPulling="2025-10-18 10:30:33.694180289 +0000 UTC m=+18.422322981" lastFinishedPulling="2025-10-18 10:30:38.064313132 +0000 UTC m=+22.792455832" observedRunningTime="2025-10-18 10:30:38.613841523 +0000 UTC m=+23.341984223" watchObservedRunningTime="2025-10-18 10:30:38.614383919 +0000 UTC m=+23.342526611"
	Oct 18 10:30:42 old-k8s-version-309062 kubelet[776]: I1018 10:30:42.608330     776 scope.go:117] "RemoveContainer" containerID="3c21f9ad39ad423d635fa43d8991784cc90226d5d254f907fa623a595c613683"
	Oct 18 10:30:43 old-k8s-version-309062 kubelet[776]: I1018 10:30:43.612254     776 scope.go:117] "RemoveContainer" containerID="3c21f9ad39ad423d635fa43d8991784cc90226d5d254f907fa623a595c613683"
	Oct 18 10:30:43 old-k8s-version-309062 kubelet[776]: I1018 10:30:43.612777     776 scope.go:117] "RemoveContainer" containerID="e139066a95b1cb2406d689addd272f72a877325975d4d6858c302efa0cdd9fe0"
	Oct 18 10:30:43 old-k8s-version-309062 kubelet[776]: E1018 10:30:43.613099     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zglwb_kubernetes-dashboard(a1b11f6f-40ca-4dc0-a12b-a7af607494ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb" podUID="a1b11f6f-40ca-4dc0-a12b-a7af607494ea"
	Oct 18 10:30:44 old-k8s-version-309062 kubelet[776]: I1018 10:30:44.616683     776 scope.go:117] "RemoveContainer" containerID="e139066a95b1cb2406d689addd272f72a877325975d4d6858c302efa0cdd9fe0"
	Oct 18 10:30:44 old-k8s-version-309062 kubelet[776]: E1018 10:30:44.616995     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zglwb_kubernetes-dashboard(a1b11f6f-40ca-4dc0-a12b-a7af607494ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb" podUID="a1b11f6f-40ca-4dc0-a12b-a7af607494ea"
	Oct 18 10:30:45 old-k8s-version-309062 kubelet[776]: I1018 10:30:45.618501     776 scope.go:117] "RemoveContainer" containerID="e139066a95b1cb2406d689addd272f72a877325975d4d6858c302efa0cdd9fe0"
	Oct 18 10:30:45 old-k8s-version-309062 kubelet[776]: E1018 10:30:45.618810     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zglwb_kubernetes-dashboard(a1b11f6f-40ca-4dc0-a12b-a7af607494ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb" podUID="a1b11f6f-40ca-4dc0-a12b-a7af607494ea"
	Oct 18 10:30:53 old-k8s-version-309062 kubelet[776]: I1018 10:30:53.638337     776 scope.go:117] "RemoveContainer" containerID="fe5b69f4fe8d8244c0c04923326903928bd6aa32735e98e900cef0e8929410f7"
	Oct 18 10:30:56 old-k8s-version-309062 kubelet[776]: I1018 10:30:56.427404     776 scope.go:117] "RemoveContainer" containerID="e139066a95b1cb2406d689addd272f72a877325975d4d6858c302efa0cdd9fe0"
	Oct 18 10:30:56 old-k8s-version-309062 kubelet[776]: I1018 10:30:56.648644     776 scope.go:117] "RemoveContainer" containerID="e139066a95b1cb2406d689addd272f72a877325975d4d6858c302efa0cdd9fe0"
	Oct 18 10:30:56 old-k8s-version-309062 kubelet[776]: I1018 10:30:56.648947     776 scope.go:117] "RemoveContainer" containerID="df2bceb48eb6286384fe88e5aaf40fa9a00026686b7d31781f358616df89bec5"
	Oct 18 10:30:56 old-k8s-version-309062 kubelet[776]: E1018 10:30:56.649261     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zglwb_kubernetes-dashboard(a1b11f6f-40ca-4dc0-a12b-a7af607494ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb" podUID="a1b11f6f-40ca-4dc0-a12b-a7af607494ea"
	Oct 18 10:31:03 old-k8s-version-309062 kubelet[776]: I1018 10:31:03.668973     776 scope.go:117] "RemoveContainer" containerID="df2bceb48eb6286384fe88e5aaf40fa9a00026686b7d31781f358616df89bec5"
	Oct 18 10:31:03 old-k8s-version-309062 kubelet[776]: E1018 10:31:03.670103     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zglwb_kubernetes-dashboard(a1b11f6f-40ca-4dc0-a12b-a7af607494ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb" podUID="a1b11f6f-40ca-4dc0-a12b-a7af607494ea"
	Oct 18 10:31:13 old-k8s-version-309062 kubelet[776]: I1018 10:31:13.816497     776 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 18 10:31:13 old-k8s-version-309062 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 10:31:13 old-k8s-version-309062 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 10:31:13 old-k8s-version-309062 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [1221147b76c646921a654388aa40c79193a87c89eee11d8ab529f19b710f6028] <==
	2025/10/18 10:30:38 Using namespace: kubernetes-dashboard
	2025/10/18 10:30:38 Using in-cluster config to connect to apiserver
	2025/10/18 10:30:38 Using secret token for csrf signing
	2025/10/18 10:30:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 10:30:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 10:30:38 Successful initial request to the apiserver, version: v1.28.0
	2025/10/18 10:30:38 Generating JWE encryption key
	2025/10/18 10:30:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 10:30:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 10:30:38 Initializing JWE encryption key from synchronized object
	2025/10/18 10:30:38 Creating in-cluster Sidecar client
	2025/10/18 10:30:38 Serving insecurely on HTTP port: 9090
	2025/10/18 10:30:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 10:31:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 10:30:38 Starting overwatch
	
	
	==> storage-provisioner [28b068be6ba6cf1e3afbc8ec9e6600adf10e615cd713b32937ec5dcd20863c29] <==
	I1018 10:30:53.689087       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 10:30:53.704991       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 10:30:53.705130       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 10:31:11.104356       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 10:31:11.104529       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-309062_e5d78c40-c279-4a12-8217-aa8dd528ca86!
	I1018 10:31:11.105687       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"30bc5ecc-ff23-48f8-9195-73f60d25bbff", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-309062_e5d78c40-c279-4a12-8217-aa8dd528ca86 became leader
	I1018 10:31:11.205690       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-309062_e5d78c40-c279-4a12-8217-aa8dd528ca86!
	
	
	==> storage-provisioner [fe5b69f4fe8d8244c0c04923326903928bd6aa32735e98e900cef0e8929410f7] <==
	I1018 10:30:23.121654       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 10:30:53.123479       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
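The "dial tcp 10.96.0.1:443: i/o timeout" failures from kindnet and the first storage-provisioner instance above are timeouts rather than "connection refused": the service VIP's NAT rules still routed the packets, but nothing completed the TCP handshake, which is what in-cluster clients see while the apiserver is unreachable during the restart window. For reference, a minimal Go sketch of the same probe (illustrative only: the VIP is reachable solely from inside the cluster network, and the 5-second timeout is an arbitrary choice):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 10.96.0.1:443 is the in-cluster service VIP fronting the apiserver.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			// Prints "i/o timeout" when packets are routed but never answered,
			// versus "connection refused" when the host is reachable but the port is closed.
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("connected:", conn.RemoteAddr())
	}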
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-309062 -n old-k8s-version-309062
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-309062 -n old-k8s-version-309062: exit status 2 (503.809957ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
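minikube encodes cluster state in the exit code of "status", so the harness logs a non-zero exit as "(may be ok)" rather than aborting the post-mortem. A hedged Go sketch of capturing both the templated output and the exit code, using the same invocation as above (the wrapper itself is illustrative, not the harness code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", "old-k8s-version-309062")
		out, err := cmd.Output() // stdout ("Running") is returned even alongside an *exec.ExitError
		fmt.Printf("status: %s", out)
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("exit code:", exitErr.ExitCode()) // 2 in the run above
		}
	}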
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-309062 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
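The snapshot prints "<empty>" for unset variables so proxy misconfiguration can be ruled out at a glance. A small illustrative Go equivalent of the same check (not the harness code; it reads only the three variables shown above):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
			v := os.Getenv(k)
			if v == "" {
				v = "<empty>" // mirrors the report's notation for unset variables
			}
			fmt.Printf("%s=%q\n", k, v)
		}
	}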
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-309062
helpers_test.go:243: (dbg) docker inspect old-k8s-version-309062:

-- stdout --
	[
	    {
	        "Id": "ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750",
	        "Created": "2025-10-18T10:28:48.73837051Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470547,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:30:08.554919373Z",
	            "FinishedAt": "2025-10-18T10:30:07.717738965Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750/hostname",
	        "HostsPath": "/var/lib/docker/containers/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750/hosts",
	        "LogPath": "/var/lib/docker/containers/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750-json.log",
	        "Name": "/old-k8s-version-309062",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-309062:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-309062",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750",
	                "LowerDir": "/var/lib/docker/overlay2/76f2ddbb8a111823c1151fde350c303f28ae9e1b59f3c48b606ee26f7eb90656-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/76f2ddbb8a111823c1151fde350c303f28ae9e1b59f3c48b606ee26f7eb90656/merged",
	                "UpperDir": "/var/lib/docker/overlay2/76f2ddbb8a111823c1151fde350c303f28ae9e1b59f3c48b606ee26f7eb90656/diff",
	                "WorkDir": "/var/lib/docker/overlay2/76f2ddbb8a111823c1151fde350c303f28ae9e1b59f3c48b606ee26f7eb90656/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-309062",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-309062/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-309062",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-309062",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-309062",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f85fd6f80e49d2a568ed20f0d7633966608aa833dd63afb6827f09acf4992782",
	            "SandboxKey": "/var/run/docker/netns/f85fd6f80e49",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-309062": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:59:28:a5:e4:cc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "082c8a75e8eb3b8d93bfcaf0e7df425e066e901e2d22d2638140f1c9d2501c82",
	                    "EndpointID": "772e315ccce4b0ba90a5e85294dffa9bdc7ef2f2357aecc1087845c5dad38089",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-309062",
	                        "ef75e2f86668"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
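Most of the inspect output above answers one question for the post-mortem: where the apiserver's 8443/tcp is published on the host. A hedged Go sketch that decodes just that fragment (the struct shape is inferred from the JSON above; this is not minikube's own parser):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models only the NetworkSettings.Ports fragment used here.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "old-k8s-version-309062").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
			fmt.Println("decode failed:", err)
			return
		}
		for _, b := range entries[0].NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:33427 above
		}
	}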
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-309062 -n old-k8s-version-309062
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-309062 -n old-k8s-version-309062: exit status 2 (594.731079ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-309062 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-309062 logs -n 25: (1.867718821s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-881658 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo containerd config dump                                                                                                                                                                                                  │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo crio config                                                                                                                                                                                                             │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ delete  │ -p cilium-881658                                                                                                                                                                                                                              │ cilium-881658            │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │ 18 Oct 25 10:27 UTC │
	│ start   │ -p cert-expiration-733799 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-733799   │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │ 18 Oct 25 10:28 UTC │
	│ delete  │ -p force-systemd-env-360583                                                                                                                                                                                                                   │ force-systemd-env-360583 │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ start   │ -p cert-options-233372 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-233372      │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ ssh     │ cert-options-233372 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-233372      │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ ssh     │ -p cert-options-233372 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-233372      │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ delete  │ -p cert-options-233372                                                                                                                                                                                                                        │ cert-options-233372      │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ start   │ -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-309062   │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-309062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-309062   │ jenkins │ v1.37.0 │ 18 Oct 25 10:29 UTC │                     │
	│ stop    │ -p old-k8s-version-309062 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-309062   │ jenkins │ v1.37.0 │ 18 Oct 25 10:29 UTC │ 18 Oct 25 10:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-309062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-309062   │ jenkins │ v1.37.0 │ 18 Oct 25 10:30 UTC │ 18 Oct 25 10:30 UTC │
	│ start   │ -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-309062   │ jenkins │ v1.37.0 │ 18 Oct 25 10:30 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p cert-expiration-733799 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-733799   │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │                     │
	│ image   │ old-k8s-version-309062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-309062   │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ pause   │ -p old-k8s-version-309062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-309062   │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:31:06
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 10:31:06.940374  472598 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:31:06.940504  472598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:31:06.940508  472598 out.go:374] Setting ErrFile to fd 2...
	I1018 10:31:06.940512  472598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:31:06.940819  472598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:31:06.941578  472598 out.go:368] Setting JSON to false
	I1018 10:31:06.943138  472598 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8017,"bootTime":1760775450,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:31:06.943221  472598 start.go:141] virtualization:  
	I1018 10:31:06.946802  472598 out.go:179] * [cert-expiration-733799] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:31:06.951662  472598 notify.go:220] Checking for updates...
	I1018 10:31:06.955280  472598 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:31:06.958365  472598 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:31:06.961276  472598 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:31:06.964120  472598 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:31:06.967068  472598 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:31:06.969914  472598 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:31:06.973371  472598 config.go:182] Loaded profile config "cert-expiration-733799": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:31:06.974120  472598 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:31:07.004620  472598 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:31:07.004757  472598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:31:07.066716  472598 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 10:31:07.057023295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:31:07.066812  472598 docker.go:318] overlay module found
	I1018 10:31:07.070174  472598 out.go:179] * Using the docker driver based on existing profile
	I1018 10:31:07.073086  472598 start.go:305] selected driver: docker
	I1018 10:31:07.073096  472598 start.go:925] validating driver "docker" against &{Name:cert-expiration-733799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-733799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:31:07.073272  472598 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:31:07.074005  472598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:31:07.139257  472598 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 10:31:07.129137024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:31:07.139585  472598 cni.go:84] Creating CNI manager for ""
	I1018 10:31:07.139644  472598 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:31:07.139684  472598 start.go:349] cluster config:
	{Name:cert-expiration-733799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-733799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:31:07.142871  472598 out.go:179] * Starting "cert-expiration-733799" primary control-plane node in "cert-expiration-733799" cluster
	I1018 10:31:07.145742  472598 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:31:07.148714  472598 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:31:07.151831  472598 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:31:07.151866  472598 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:31:07.151885  472598 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 10:31:07.151895  472598 cache.go:58] Caching tarball of preloaded images
	I1018 10:31:07.151986  472598 preload.go:233] Found /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 10:31:07.151995  472598 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 10:31:07.152117  472598 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/config.json ...
	I1018 10:31:07.175761  472598 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:31:07.175773  472598 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 10:31:07.175793  472598 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:31:07.175815  472598 start.go:360] acquireMachinesLock for cert-expiration-733799: {Name:mk4e0847b4c10db23105e96816f6db85cd8efa9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:31:07.175877  472598 start.go:364] duration metric: took 46.097µs to acquireMachinesLock for "cert-expiration-733799"
	I1018 10:31:07.175896  472598 start.go:96] Skipping create...Using existing machine configuration
	I1018 10:31:07.175900  472598 fix.go:54] fixHost starting: 
	I1018 10:31:07.176174  472598 cli_runner.go:164] Run: docker container inspect cert-expiration-733799 --format={{.State.Status}}
	I1018 10:31:07.193271  472598 fix.go:112] recreateIfNeeded on cert-expiration-733799: state=Running err=<nil>
	W1018 10:31:07.193291  472598 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 10:31:07.196467  472598 out.go:252] * Updating the running docker "cert-expiration-733799" container ...
	I1018 10:31:07.196502  472598 machine.go:93] provisionDockerMachine start ...
	I1018 10:31:07.196579  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:07.214592  472598 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:07.214908  472598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1018 10:31:07.214915  472598 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:31:07.364345  472598 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-733799
	
	I1018 10:31:07.364370  472598 ubuntu.go:182] provisioning hostname "cert-expiration-733799"
	I1018 10:31:07.364461  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:07.384871  472598 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:07.385388  472598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1018 10:31:07.385400  472598 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-733799 && echo "cert-expiration-733799" | sudo tee /etc/hostname
	I1018 10:31:07.547479  472598 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-733799
	
	I1018 10:31:07.547569  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:07.566709  472598 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:07.567013  472598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1018 10:31:07.567028  472598 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-733799' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-733799/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-733799' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:31:07.717600  472598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
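
The docker container inspect -f template that recurs in these steps pulls the host port Docker mapped to the container's 22/tcp, which is where the provisioner dials SSH (33409 here). A quicker equivalent lookup, assuming the same profile container name, is docker's port subcommand:

    docker port cert-expiration-733799 22/tcp
    # prints e.g. 0.0.0.0:33409, the address:port the SSH client above connects to
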
	I1018 10:31:07.717625  472598 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:31:07.717654  472598 ubuntu.go:190] setting up certificates
	I1018 10:31:07.717663  472598 provision.go:84] configureAuth start
	I1018 10:31:07.717720  472598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-733799
	I1018 10:31:07.736842  472598 provision.go:143] copyHostCerts
	I1018 10:31:07.736909  472598 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:31:07.736929  472598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:31:07.737005  472598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:31:07.737098  472598 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:31:07.737102  472598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:31:07.737124  472598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:31:07.737171  472598 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:31:07.737174  472598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:31:07.737350  472598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:31:07.737412  472598 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-733799 san=[127.0.0.1 192.168.85.2 cert-expiration-733799 localhost minikube]
	I1018 10:31:08.124478  472598 provision.go:177] copyRemoteCerts
	I1018 10:31:08.124538  472598 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:31:08.124590  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:08.144194  472598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/cert-expiration-733799/id_rsa Username:docker}
	I1018 10:31:08.250016  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:31:08.269316  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 10:31:08.289794  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:31:08.309253  472598 provision.go:87] duration metric: took 591.570359ms to configureAuth
	I1018 10:31:08.309271  472598 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:31:08.309454  472598 config.go:182] Loaded profile config "cert-expiration-733799": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:31:08.309561  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:08.327205  472598 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:08.327518  472598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1018 10:31:08.327537  472598 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:31:13.732325  472598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:31:13.732338  472598 machine.go:96] duration metric: took 6.535830085s to provisionDockerMachine
	I1018 10:31:13.732347  472598 start.go:293] postStartSetup for "cert-expiration-733799" (driver="docker")
	I1018 10:31:13.732357  472598 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:31:13.732427  472598 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:31:13.732464  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:13.761282  472598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/cert-expiration-733799/id_rsa Username:docker}
	I1018 10:31:13.876578  472598 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:31:13.880509  472598 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:31:13.880526  472598 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:31:13.880562  472598 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:31:13.880624  472598 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:31:13.880703  472598 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:31:13.880800  472598 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:31:13.888965  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:31:13.907546  472598 start.go:296] duration metric: took 175.182948ms for postStartSetup
	I1018 10:31:13.907633  472598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:31:13.907672  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:13.928848  472598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/cert-expiration-733799/id_rsa Username:docker}
	I1018 10:31:14.035342  472598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:31:14.040700  472598 fix.go:56] duration metric: took 6.864792216s for fixHost
	I1018 10:31:14.040715  472598 start.go:83] releasing machines lock for "cert-expiration-733799", held for 6.864831412s
	I1018 10:31:14.040784  472598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-733799
	I1018 10:31:14.058794  472598 ssh_runner.go:195] Run: cat /version.json
	I1018 10:31:14.058846  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:14.059100  472598 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:31:14.059150  472598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-733799
	I1018 10:31:14.079349  472598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/cert-expiration-733799/id_rsa Username:docker}
	I1018 10:31:14.080869  472598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/cert-expiration-733799/id_rsa Username:docker}
	I1018 10:31:14.299432  472598 ssh_runner.go:195] Run: systemctl --version
	I1018 10:31:14.306373  472598 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:31:14.404276  472598 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:31:14.414048  472598 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:31:14.414133  472598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:31:14.424208  472598 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 10:31:14.424221  472598 start.go:495] detecting cgroup driver to use...
	I1018 10:31:14.424251  472598 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:31:14.424295  472598 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:31:14.459427  472598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:31:14.474433  472598 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:31:14.474487  472598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:31:14.491215  472598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:31:14.506292  472598 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:31:14.678665  472598 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:31:14.826999  472598 docker.go:234] disabling docker service ...
	I1018 10:31:14.827065  472598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:31:14.842394  472598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:31:14.855825  472598 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:31:15.004836  472598 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:31:15.209168  472598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:31:15.223605  472598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:31:15.239534  472598 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:31:15.239598  472598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:15.252769  472598 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:31:15.252825  472598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:15.263685  472598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:15.280108  472598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:15.293455  472598 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:31:15.301720  472598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:15.311990  472598 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:15.321335  472598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:15.331446  472598 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:31:15.343957  472598 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:31:15.351873  472598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:31:15.531270  472598 ssh_runner.go:195] Run: sudo systemctl restart crio
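
The sed edits above patch /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before this restart. On the node, one way to confirm the effective values once CRI-O is back up is to grep its rendered configuration, via the same crio config invocation this log runs a few steps later:

    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup'
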
	I1018 10:31:15.751258  472598 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:31:15.751315  472598 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:31:15.755237  472598 start.go:563] Will wait 60s for crictl version
	I1018 10:31:15.755291  472598 ssh_runner.go:195] Run: which crictl
	I1018 10:31:15.758690  472598 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:31:15.792587  472598 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:31:15.792688  472598 ssh_runner.go:195] Run: crio --version
	I1018 10:31:15.833058  472598 ssh_runner.go:195] Run: crio --version
	I1018 10:31:15.875645  472598 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:31:15.879302  472598 cli_runner.go:164] Run: docker network inspect cert-expiration-733799 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:31:15.903950  472598 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 10:31:15.908610  472598 kubeadm.go:883] updating cluster {Name:cert-expiration-733799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-733799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:31:15.908708  472598 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:31:15.908771  472598 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:31:15.962219  472598 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:31:15.962230  472598 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:31:15.962291  472598 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:31:16.008870  472598 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:31:16.008883  472598 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:31:16.008890  472598 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 10:31:16.009621  472598 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-733799 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-733799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
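
The ExecStart override above lands as a systemd drop-in (the 372-byte 10-kubeadm.conf scp'd a few lines below), so the unit the node actually runs is the merge of /lib/systemd/system/kubelet.service and that drop-in. On a live profile the merged unit can be read back with systemctl, for example:

    minikube -p cert-expiration-733799 ssh -- systemctl cat kubelet
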
	I1018 10:31:16.009724  472598 ssh_runner.go:195] Run: crio config
	I1018 10:31:16.075074  472598 cni.go:84] Creating CNI manager for ""
	I1018 10:31:16.075087  472598 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:31:16.075105  472598 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:31:16.075158  472598 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-733799 NodeName:cert-expiration-733799 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:31:16.075322  472598 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-733799"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
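
For a config like the one just rendered, recent kubeadm releases can check the file before it is handed to init/join; the subcommand's availability depends on the kubeadm version, and the path is the staging file this run writes a few lines below:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
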
	
	I1018 10:31:16.075407  472598 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:31:16.087680  472598 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:31:16.087767  472598 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:31:16.100583  472598 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1018 10:31:16.118259  472598 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:31:16.131918  472598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1018 10:31:16.148566  472598 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:31:16.153205  472598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:31:16.338257  472598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:31:16.353629  472598 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799 for IP: 192.168.85.2
	I1018 10:31:16.353639  472598 certs.go:195] generating shared ca certs ...
	I1018 10:31:16.353653  472598 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:16.353804  472598 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:31:16.353842  472598 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:31:16.353847  472598 certs.go:257] generating profile certs ...
	W1018 10:31:16.353972  472598 out.go:285] ! Certificate client.crt has expired. Generating a new one...
	I1018 10:31:16.353988  472598 certs.go:624] cert expired /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/client.crt: expiration: 2025-10-18 10:30:40 +0000 UTC, now: 2025-10-18 10:31:16.353983962 +0000 UTC m=+9.470419764
	I1018 10:31:16.354099  472598 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/client.key
	I1018 10:31:16.354113  472598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/client.crt with IP's: []
	I1018 10:31:16.926947  472598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/client.crt ...
	I1018 10:31:16.926970  472598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/client.crt: {Name:mk3f4bf3e402e2d12e2e6cd120bfd3e8d5d28146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:16.927107  472598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/client.key ...
	I1018 10:31:16.927116  472598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/client.key: {Name:mka22190f36b7b26e8884d5b1a1d074443a7236a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1018 10:31:16.927275  472598 out.go:285] ! Certificate apiserver.crt.ba217d6e has expired. Generating a new one...
	I1018 10:31:16.927302  472598 certs.go:624] cert expired /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/apiserver.crt.ba217d6e: expiration: 2025-10-18 10:30:40 +0000 UTC, now: 2025-10-18 10:31:16.927296015 +0000 UTC m=+10.043731834
	I1018 10:31:16.927373  472598 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/apiserver.key.ba217d6e
	I1018 10:31:16.927387  472598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/apiserver.crt.ba217d6e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 10:31:17.661589  472598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/apiserver.crt.ba217d6e ...
	I1018 10:31:17.661604  472598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/apiserver.crt.ba217d6e: {Name:mk088c3d3ee3b86da726b694011336776cda426a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:17.661793  472598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/apiserver.key.ba217d6e ...
	I1018 10:31:17.661805  472598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/apiserver.key.ba217d6e: {Name:mk8f3afdaa4a122b50db078655a2403c7fbeb77d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:17.661872  472598 certs.go:382] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/apiserver.crt.ba217d6e -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/apiserver.crt
	I1018 10:31:17.662074  472598 certs.go:386] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/apiserver.key.ba217d6e -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/apiserver.key
	W1018 10:31:17.662247  472598 out.go:285] ! Certificate proxy-client.crt has expired. Generating a new one...
	I1018 10:31:17.662271  472598 certs.go:624] cert expired /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/proxy-client.crt: expiration: 2025-10-18 10:30:41 +0000 UTC, now: 2025-10-18 10:31:17.6622662 +0000 UTC m=+10.778702018
	I1018 10:31:17.662335  472598 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/proxy-client.key
	I1018 10:31:17.662349  472598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/proxy-client.crt with IP's: []
	I1018 10:31:18.186469  472598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/proxy-client.crt ...
	I1018 10:31:18.186488  472598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/proxy-client.crt: {Name:mk881fad5d50b8c7274d8f90e29838b73e788c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:18.186669  472598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/proxy-client.key ...
	I1018 10:31:18.186676  472598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/proxy-client.key: {Name:mke280a5d2f0cc8114d23fb61683f9e09b560256 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
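
The regenerated apiserver certificate was issued for the SANs logged above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). The SANs actually embedded in the written file can be read back with openssl:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
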
	I1018 10:31:18.186865  472598 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:31:18.186902  472598 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:31:18.186910  472598 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:31:18.186934  472598 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:31:18.186956  472598 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:31:18.186977  472598 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:31:18.187019  472598 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:31:18.187576  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:31:18.213155  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:31:18.242117  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:31:18.268521  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:31:18.314521  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 10:31:18.354819  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:31:18.377619  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:31:18.423625  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/cert-expiration-733799/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 10:31:18.443071  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:31:18.477360  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:31:18.509272  472598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:31:18.556239  472598 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:31:18.613246  472598 ssh_runner.go:195] Run: openssl version
	I1018 10:31:18.622638  472598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:31:18.649760  472598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:31:18.671785  472598 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:31:18.671836  472598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:31:18.754141  472598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:31:18.767845  472598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:31:18.786345  472598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:31:18.796359  472598 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:31:18.796415  472598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:31:18.874569  472598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:31:18.889298  472598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:31:18.905369  472598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:31:18.912354  472598 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:31:18.912407  472598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:31:18.985544  472598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 10:31:19.002951  472598 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:31:19.007483  472598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 10:31:19.104477  472598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 10:31:19.233943  472598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 10:31:19.327146  472598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 10:31:19.406167  472598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 10:31:19.497711  472598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
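
Each of the checks above relies on openssl's -checkend flag: the command exits 0 if the certificate is still valid N seconds from now and non-zero otherwise, so -checkend 86400 flags anything expiring within 24 hours. Run standalone, the same test reads:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo 'valid for at least 24h' || echo 'expires within 24h'
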
	I1018 10:31:19.608470  472598 kubeadm.go:400] StartCluster: {Name:cert-expiration-733799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-733799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:31:19.608540  472598 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:31:19.608627  472598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:31:19.681991  472598 cri.go:89] found id: "b9a676ba14a1d7b4168be448f06c6a2e7e4c3c3b416bbcf39d3784652e4e7b43"
	I1018 10:31:19.682002  472598 cri.go:89] found id: "15ad4d1fa6c4ff25e3f37c562d102917d4b3d39b86a32693c889778ff6cf60ef"
	I1018 10:31:19.682006  472598 cri.go:89] found id: "d5fd7cc882661257e80222963aa4ef92e1bf7dc110970865e7341d4abfb332a7"
	I1018 10:31:19.682009  472598 cri.go:89] found id: "961edd59ba0a6573eae7e158b27feb3638f2ac5921aa8bee673ffaac17aceb3a"
	I1018 10:31:19.682011  472598 cri.go:89] found id: "722db29810bfc4cbc0a59b734273bae0645c6cb364b74e99067c8866ccd44b80"
	I1018 10:31:19.682014  472598 cri.go:89] found id: "0b7b65571ba258457b57110ba205cfa74e11fdfb9e2eb571d4f91c454cc47f36"
	I1018 10:31:19.682016  472598 cri.go:89] found id: "b9908b772681b6cd682ad028ec57c0c13ad8410b8e9986bf3f25f36068caf4d1"
	I1018 10:31:19.682018  472598 cri.go:89] found id: "e02c6421f56c7db7f1a6024c0a5f4bd96a557ad7036871f32839551b3d4a259e"
	I1018 10:31:19.682020  472598 cri.go:89] found id: "5be3b6f1f5e8cc69177f3e48d7e0aac50d27889facf814faa0a16a7df1775a80"
	I1018 10:31:19.682027  472598 cri.go:89] found id: "c747d73325a5220ff9965dbd7d02a12568d7aef962634fa087dc94651bdd1b99"
	I1018 10:31:19.682030  472598 cri.go:89] found id: "1ecf5753aaab535a587869d7c8d55a4bdcbeb0082ddc955919243867086cb9fb"
	I1018 10:31:19.682032  472598 cri.go:89] found id: "439528a7ed91b52b69c0adf101d79a7bf0e5c02af2e5001a8c78cb5083e6cda3"
	I1018 10:31:19.682034  472598 cri.go:89] found id: "0c4d8a8649b205e3561fc7f2a081b10aad0318ba5abbb89dffa301b840126052"
	I1018 10:31:19.682037  472598 cri.go:89] found id: "659f30b05c99a2fb118928e4308e7a7af47c8d83901be8c7d3afb291cc6ac23f"
	I1018 10:31:19.682039  472598 cri.go:89] found id: "770aa1bf7dda64fddca7347847e9d122a17c9e5a01fb08a5143fc8d8ff6420b9"
	I1018 10:31:19.682043  472598 cri.go:89] found id: "b8985fa7c8b93ad1baea2f56fbfaeadc88bb7f7b051e25c45ad99d10acb92020"
	I1018 10:31:19.682045  472598 cri.go:89] found id: ""
	I1018 10:31:19.682133  472598 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 10:31:19.718440  472598 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:31:19Z" level=error msg="open /run/runc: no such file or directory"
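
This fallback is expected rather than fatal: runc list reads runc's default state directory, /run/runc, which is absent here because this CRI-O installation keeps OCI runtime state under its own configured root. Going through the CRI socket, as the crictl step above does, avoids depending on the runtime's on-disk layout; spelled out explicitly:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
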
	I1018 10:31:19.718511  472598 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:31:19.734592  472598 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 10:31:19.734601  472598 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 10:31:19.734650  472598 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 10:31:19.746367  472598 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 10:31:19.747040  472598 kubeconfig.go:125] found "cert-expiration-733799" server: "https://192.168.85.2:8443"
	I1018 10:31:19.748661  472598 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 10:31:19.767503  472598 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 10:31:19.767530  472598 kubeadm.go:601] duration metric: took 32.92451ms to restartPrimaryControlPlane
	I1018 10:31:19.767537  472598 kubeadm.go:402] duration metric: took 159.078529ms to StartCluster
	I1018 10:31:19.767550  472598 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:19.767608  472598 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:31:19.768480  472598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:19.768701  472598 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:31:19.768990  472598 config.go:182] Loaded profile config "cert-expiration-733799": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:31:19.769044  472598 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:31:19.769107  472598 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-733799"
	I1018 10:31:19.769120  472598 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-733799"
	W1018 10:31:19.769124  472598 addons.go:247] addon storage-provisioner should already be in state true
	I1018 10:31:19.769144  472598 host.go:66] Checking if "cert-expiration-733799" exists ...
	I1018 10:31:19.769571  472598 cli_runner.go:164] Run: docker container inspect cert-expiration-733799 --format={{.State.Status}}
	I1018 10:31:19.771853  472598 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-733799"
	I1018 10:31:19.771872  472598 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-733799"
	I1018 10:31:19.772199  472598 cli_runner.go:164] Run: docker container inspect cert-expiration-733799 --format={{.State.Status}}
	I1018 10:31:19.781316  472598 out.go:179] * Verifying Kubernetes components...
	I1018 10:31:19.784323  472598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:31:19.829527  472598 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
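
The sections that follow are the node-level log dump collected for the failing test; each ==> ... <== header introduces one source. The CRI-O stream below comes from a different profile's node (old-k8s-version-309062, per the hostnames in the lines), where the same journal can be pulled directly, e.g.:

    minikube -p old-k8s-version-309062 ssh -- sudo journalctl -u crio --no-pager
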
	
	
	==> CRI-O <==
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.430259892Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.436797035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.437565714Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.452968552Z" level=info msg="Created container df2bceb48eb6286384fe88e5aaf40fa9a00026686b7d31781f358616df89bec5: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb/dashboard-metrics-scraper" id=0c588605-5b5f-41f8-ac1a-4daea9878635 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.453968019Z" level=info msg="Starting container: df2bceb48eb6286384fe88e5aaf40fa9a00026686b7d31781f358616df89bec5" id=d2ef1f92-06b9-453e-9137-dbf55e3b5837 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.457715511Z" level=info msg="Started container" PID=1634 containerID=df2bceb48eb6286384fe88e5aaf40fa9a00026686b7d31781f358616df89bec5 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb/dashboard-metrics-scraper id=d2ef1f92-06b9-453e-9137-dbf55e3b5837 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c002591376b626cf1d60e09c810a299f70d9961d351a962d92482ea5359d7697
	Oct 18 10:30:56 old-k8s-version-309062 conmon[1632]: conmon df2bceb48eb6286384fe <ninfo>: container 1634 exited with status 1
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.650552286Z" level=info msg="Removing container: e139066a95b1cb2406d689addd272f72a877325975d4d6858c302efa0cdd9fe0" id=4c5be669-0125-42a6-9642-916696aa8f4b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.658634532Z" level=info msg="Error loading conmon cgroup of container e139066a95b1cb2406d689addd272f72a877325975d4d6858c302efa0cdd9fe0: cgroup deleted" id=4c5be669-0125-42a6-9642-916696aa8f4b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:30:56 old-k8s-version-309062 crio[647]: time="2025-10-18T10:30:56.662140086Z" level=info msg="Removed container e139066a95b1cb2406d689addd272f72a877325975d4d6858c302efa0cdd9fe0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb/dashboard-metrics-scraper" id=4c5be669-0125-42a6-9642-916696aa8f4b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.626199321Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.630374444Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.630410752Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.630434498Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.633795108Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.633831293Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.633856622Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.637469008Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.637504274Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.637530284Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.64079957Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.640834918Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.640862521Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.644161107Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:31:02 old-k8s-version-309062 crio[647]: time="2025-10-18T10:31:02.644196127Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	df2bceb48eb62       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   c002591376b62       dashboard-metrics-scraper-5f989dc9cf-zglwb       kubernetes-dashboard
	28b068be6ba6c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   2296118fb243c       storage-provisioner                              kube-system
	1221147b76c64       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   babf1ea1dded5       kubernetes-dashboard-8694d4445c-gt5x2            kubernetes-dashboard
	06872f9d98ba5       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   fdb0ba21dcf40       busybox                                          default
	f3a8ae82b8c31       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           58 seconds ago       Running             coredns                     1                   3214cf0c2cacd       coredns-5dd5756b68-4hhdr                         kube-system
	6ae7dbec44172       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   63246fe301f39       kindnet-fqnmf                                    kube-system
	d4018cae69b9b       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           58 seconds ago       Running             kube-proxy                  1                   4e26ed2776cec       kube-proxy-xvwns                                 kube-system
	fe5b69f4fe8d8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   2296118fb243c       storage-provisioner                              kube-system
	cba131a162c9f       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   e0a856d26b336       kube-controller-manager-old-k8s-version-309062   kube-system
	0cb67d3420bb2       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   d947735e30e55       etcd-old-k8s-version-309062                      kube-system
	c26f04e131bc5       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   6eb03bbab5abd       kube-scheduler-old-k8s-version-309062            kube-system
	a5dd6148e2e53       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   ade6551731682       kube-apiserver-old-k8s-version-309062            kube-system
	
	
	==> coredns [f3a8ae82b8c31103a9aa668ee614af5b2764449e218994b6d3ae42ddd5d15820] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50728 - 31986 "HINFO IN 7065832067986444152.1021077917737199552. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021954631s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-309062
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-309062
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=old-k8s-version-309062
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_29_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:29:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-309062
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:31:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:30:51 +0000   Sat, 18 Oct 2025 10:29:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:30:51 +0000   Sat, 18 Oct 2025 10:29:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:30:51 +0000   Sat, 18 Oct 2025 10:29:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 10:30:51 +0000   Sat, 18 Oct 2025 10:29:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-309062
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                98c6c6ec-8267-4a2c-858a-d465056e6aea
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-5dd5756b68-4hhdr                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     115s
	  kube-system                 etcd-old-k8s-version-309062                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m7s
	  kube-system                 kindnet-fqnmf                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-old-k8s-version-309062             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-controller-manager-old-k8s-version-309062    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-xvwns                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-old-k8s-version-309062             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-zglwb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-gt5x2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 113s                   kube-proxy       
	  Normal  Starting                 57s                    kube-proxy       
	  Normal  Starting                 2m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m16s (x8 over 2m16s)  kubelet          Node old-k8s-version-309062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s (x8 over 2m16s)  kubelet          Node old-k8s-version-309062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m16s (x8 over 2m16s)  kubelet          Node old-k8s-version-309062 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m7s                   kubelet          Node old-k8s-version-309062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m7s                   kubelet          Node old-k8s-version-309062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m7s                   kubelet          Node old-k8s-version-309062 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m7s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           116s                   node-controller  Node old-k8s-version-309062 event: Registered Node old-k8s-version-309062 in Controller
	  Normal  NodeReady                101s                   kubelet          Node old-k8s-version-309062 status is now: NodeReady
	  Normal  Starting                 66s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  66s (x8 over 66s)      kubelet          Node old-k8s-version-309062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    66s (x8 over 66s)      kubelet          Node old-k8s-version-309062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     66s (x8 over 66s)      kubelet          Node old-k8s-version-309062 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                    node-controller  Node old-k8s-version-309062 event: Registered Node old-k8s-version-309062 in Controller
	
	
	==> dmesg <==
	[Oct18 10:05] overlayfs: idmapped layers are currently not supported
	[Oct18 10:10] overlayfs: idmapped layers are currently not supported
	[ +35.463301] overlayfs: idmapped layers are currently not supported
	[Oct18 10:11] overlayfs: idmapped layers are currently not supported
	[Oct18 10:13] overlayfs: idmapped layers are currently not supported
	[Oct18 10:14] overlayfs: idmapped layers are currently not supported
	[Oct18 10:15] overlayfs: idmapped layers are currently not supported
	[Oct18 10:16] overlayfs: idmapped layers are currently not supported
	[  +1.944912] overlayfs: idmapped layers are currently not supported
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	[ +55.677340] overlayfs: idmapped layers are currently not supported
	[  +3.870584] overlayfs: idmapped layers are currently not supported
	[Oct18 10:24] overlayfs: idmapped layers are currently not supported
	[ +31.226998] overlayfs: idmapped layers are currently not supported
	[Oct18 10:27] overlayfs: idmapped layers are currently not supported
	[ +41.576921] overlayfs: idmapped layers are currently not supported
	[  +5.117406] overlayfs: idmapped layers are currently not supported
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0cb67d3420bb2266844f350fcf9b4b39a84e2336671d33e7f75ac5c9327f4f9b] <==
	{"level":"info","ts":"2025-10-18T10:30:16.338722Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T10:30:16.338817Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T10:30:16.339109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-18T10:30:16.339622Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-18T10:30:16.340605Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T10:30:16.341383Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T10:30:16.356452Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T10:30:16.357686Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T10:30:16.375885Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T10:30:16.424835Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T10:30:16.427465Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T10:30:17.334693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T10:30:17.334806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T10:30:17.334843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-18T10:30:17.334881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T10:30:17.334927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-18T10:30:17.334968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-18T10:30:17.334998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-18T10:30:17.339511Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-309062 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T10:30:17.339605Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T10:30:17.3406Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-18T10:30:17.340855Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T10:30:17.341797Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-18T10:30:17.345225Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T10:30:17.345312Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:31:21 up  2:13,  0 user,  load average: 2.98, 3.55, 2.85
	Linux old-k8s-version-309062 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6ae7dbec44172407905dbebfe46f720674c9f2a6f90db903589de445d94e3e52] <==
	I1018 10:30:22.416248       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:30:22.425485       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 10:30:22.425688       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:30:22.425701       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:30:22.425715       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:30:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:30:22.624117       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:30:22.631098       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:30:22.631239       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:30:22.631466       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 10:30:52.624190       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 10:30:52.633015       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 10:30:52.710488       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 10:30:52.711495       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1018 10:30:54.131692       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 10:30:54.131720       1 metrics.go:72] Registering metrics
	I1018 10:30:54.131771       1 controller.go:711] "Syncing nftables rules"
	I1018 10:31:02.625889       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:31:02.625947       1 main.go:301] handling current node
	I1018 10:31:12.629430       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:31:12.629466       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a5dd6148e2e536dc32d642fcbf3fcb348930710fc9902f0cd2429867c75a933d] <==
	I1018 10:30:20.150477       1 controller.go:78] Starting OpenAPI AggregationController
	I1018 10:30:20.542134       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1018 10:30:20.542207       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 10:30:20.548384       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 10:30:20.548703       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 10:30:20.549747       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 10:30:20.549771       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 10:30:20.550081       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 10:30:20.550125       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 10:30:20.551737       1 aggregator.go:166] initial CRD sync complete...
	I1018 10:30:20.551762       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 10:30:20.551769       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 10:30:20.551776       1 cache.go:39] Caches are synced for autoregister controller
	I1018 10:30:20.605980       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 10:30:21.281551       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 10:30:23.120716       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 10:30:23.180379       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 10:30:23.217551       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 10:30:23.230857       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 10:30:23.244594       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 10:30:23.351496       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.204.122"}
	I1018 10:30:23.405061       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.161.76"}
	I1018 10:30:33.291191       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 10:30:33.318799       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 10:30:33.355184       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [cba131a162c9f2548c2ed732bf800e1f1257451692a681ba3d9bdb6f674084dc] <==
	I1018 10:30:33.342804       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-zglwb"
	I1018 10:30:33.342835       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-gt5x2"
	I1018 10:30:33.372877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.286242ms"
	I1018 10:30:33.374069       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="65.945364ms"
	I1018 10:30:33.377898       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1018 10:30:33.387751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.590534ms"
	I1018 10:30:33.388001       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="69.687µs"
	I1018 10:30:33.396524       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.192µs"
	I1018 10:30:33.406518       1 shared_informer.go:318] Caches are synced for attach detach
	I1018 10:30:33.407688       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="34.75628ms"
	I1018 10:30:33.407766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="42.864µs"
	I1018 10:30:33.411222       1 shared_informer.go:318] Caches are synced for persistent volume
	I1018 10:30:33.432242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="39.967µs"
	I1018 10:30:33.809664       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 10:30:33.832082       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 10:30:33.832113       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 10:30:38.632298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.147272ms"
	I1018 10:30:38.632537       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.794µs"
	I1018 10:30:42.632613       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="269.672µs"
	I1018 10:30:43.637116       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.381µs"
	I1018 10:30:44.632359       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.23µs"
	I1018 10:30:56.671691       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.614µs"
	I1018 10:30:58.673432       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.806962ms"
	I1018 10:30:58.673609       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.081µs"
	I1018 10:31:03.687141       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.498µs"
	
	
	==> kube-proxy [d4018cae69b9bfaf869931ec009785bb9133d02e8a0d3e946390b18d7dd19a77] <==
	I1018 10:30:23.247671       1 server_others.go:69] "Using iptables proxy"
	I1018 10:30:23.282843       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1018 10:30:23.429035       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:30:23.432454       1 server_others.go:152] "Using iptables Proxier"
	I1018 10:30:23.432552       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 10:30:23.432596       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 10:30:23.432650       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 10:30:23.432886       1 server.go:846] "Version info" version="v1.28.0"
	I1018 10:30:23.433152       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:30:23.434687       1 config.go:188] "Starting service config controller"
	I1018 10:30:23.434773       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 10:30:23.434819       1 config.go:97] "Starting endpoint slice config controller"
	I1018 10:30:23.434845       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 10:30:23.436161       1 config.go:315] "Starting node config controller"
	I1018 10:30:23.436228       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 10:30:23.534965       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1018 10:30:23.535023       1 shared_informer.go:318] Caches are synced for service config
	I1018 10:30:23.536436       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [c26f04e131bc5e297415a1e4c9e06a6a5e26b988a8b4b5335276049aefdc00d0] <==
	I1018 10:30:19.866527       1 serving.go:348] Generated self-signed cert in-memory
	I1018 10:30:23.352314       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1018 10:30:23.352394       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:30:23.358344       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1018 10:30:23.358539       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1018 10:30:23.358605       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1018 10:30:23.358658       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1018 10:30:23.364942       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:30:23.365724       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1018 10:30:23.365815       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 10:30:23.365884       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1018 10:30:23.460413       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1018 10:30:23.469208       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1018 10:30:23.469320       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 10:30:33 old-k8s-version-309062 kubelet[776]: I1018 10:30:33.398320     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fa2d2419-2697-4b0f-8b80-c51fb742e12c-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-gt5x2\" (UID: \"fa2d2419-2697-4b0f-8b80-c51fb742e12c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gt5x2"
	Oct 18 10:30:33 old-k8s-version-309062 kubelet[776]: I1018 10:30:33.398430     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a1b11f6f-40ca-4dc0-a12b-a7af607494ea-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-zglwb\" (UID: \"a1b11f6f-40ca-4dc0-a12b-a7af607494ea\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb"
	Oct 18 10:30:33 old-k8s-version-309062 kubelet[776]: I1018 10:30:33.398543     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr9p8\" (UniqueName: \"kubernetes.io/projected/fa2d2419-2697-4b0f-8b80-c51fb742e12c-kube-api-access-gr9p8\") pod \"kubernetes-dashboard-8694d4445c-gt5x2\" (UID: \"fa2d2419-2697-4b0f-8b80-c51fb742e12c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gt5x2"
	Oct 18 10:30:33 old-k8s-version-309062 kubelet[776]: W1018 10:30:33.689803     776 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750/crio-babf1ea1dded50b4ee9573c2d6e6f864e2dc2d86f80674c6a6b7211c8a43d65b WatchSource:0}: Error finding container babf1ea1dded50b4ee9573c2d6e6f864e2dc2d86f80674c6a6b7211c8a43d65b: Status 404 returned error can't find the container with id babf1ea1dded50b4ee9573c2d6e6f864e2dc2d86f80674c6a6b7211c8a43d65b
	Oct 18 10:30:33 old-k8s-version-309062 kubelet[776]: W1018 10:30:33.702904     776 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ef75e2f8666843437e98a4ab897690a0fa2f9ef30a923a6fc2d44c149c006750/crio-c002591376b626cf1d60e09c810a299f70d9961d351a962d92482ea5359d7697 WatchSource:0}: Error finding container c002591376b626cf1d60e09c810a299f70d9961d351a962d92482ea5359d7697: Status 404 returned error can't find the container with id c002591376b626cf1d60e09c810a299f70d9961d351a962d92482ea5359d7697
	Oct 18 10:30:38 old-k8s-version-309062 kubelet[776]: I1018 10:30:38.614459     776 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gt5x2" podStartSLOduration=1.244251068 podCreationTimestamp="2025-10-18 10:30:33 +0000 UTC" firstStartedPulling="2025-10-18 10:30:33.694180289 +0000 UTC m=+18.422322981" lastFinishedPulling="2025-10-18 10:30:38.064313132 +0000 UTC m=+22.792455832" observedRunningTime="2025-10-18 10:30:38.613841523 +0000 UTC m=+23.341984223" watchObservedRunningTime="2025-10-18 10:30:38.614383919 +0000 UTC m=+23.342526611"
	Oct 18 10:30:42 old-k8s-version-309062 kubelet[776]: I1018 10:30:42.608330     776 scope.go:117] "RemoveContainer" containerID="3c21f9ad39ad423d635fa43d8991784cc90226d5d254f907fa623a595c613683"
	Oct 18 10:30:43 old-k8s-version-309062 kubelet[776]: I1018 10:30:43.612254     776 scope.go:117] "RemoveContainer" containerID="3c21f9ad39ad423d635fa43d8991784cc90226d5d254f907fa623a595c613683"
	Oct 18 10:30:43 old-k8s-version-309062 kubelet[776]: I1018 10:30:43.612777     776 scope.go:117] "RemoveContainer" containerID="e139066a95b1cb2406d689addd272f72a877325975d4d6858c302efa0cdd9fe0"
	Oct 18 10:30:43 old-k8s-version-309062 kubelet[776]: E1018 10:30:43.613099     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zglwb_kubernetes-dashboard(a1b11f6f-40ca-4dc0-a12b-a7af607494ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb" podUID="a1b11f6f-40ca-4dc0-a12b-a7af607494ea"
	Oct 18 10:30:44 old-k8s-version-309062 kubelet[776]: I1018 10:30:44.616683     776 scope.go:117] "RemoveContainer" containerID="e139066a95b1cb2406d689addd272f72a877325975d4d6858c302efa0cdd9fe0"
	Oct 18 10:30:44 old-k8s-version-309062 kubelet[776]: E1018 10:30:44.616995     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zglwb_kubernetes-dashboard(a1b11f6f-40ca-4dc0-a12b-a7af607494ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb" podUID="a1b11f6f-40ca-4dc0-a12b-a7af607494ea"
	Oct 18 10:30:45 old-k8s-version-309062 kubelet[776]: I1018 10:30:45.618501     776 scope.go:117] "RemoveContainer" containerID="e139066a95b1cb2406d689addd272f72a877325975d4d6858c302efa0cdd9fe0"
	Oct 18 10:30:45 old-k8s-version-309062 kubelet[776]: E1018 10:30:45.618810     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zglwb_kubernetes-dashboard(a1b11f6f-40ca-4dc0-a12b-a7af607494ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb" podUID="a1b11f6f-40ca-4dc0-a12b-a7af607494ea"
	Oct 18 10:30:53 old-k8s-version-309062 kubelet[776]: I1018 10:30:53.638337     776 scope.go:117] "RemoveContainer" containerID="fe5b69f4fe8d8244c0c04923326903928bd6aa32735e98e900cef0e8929410f7"
	Oct 18 10:30:56 old-k8s-version-309062 kubelet[776]: I1018 10:30:56.427404     776 scope.go:117] "RemoveContainer" containerID="e139066a95b1cb2406d689addd272f72a877325975d4d6858c302efa0cdd9fe0"
	Oct 18 10:30:56 old-k8s-version-309062 kubelet[776]: I1018 10:30:56.648644     776 scope.go:117] "RemoveContainer" containerID="e139066a95b1cb2406d689addd272f72a877325975d4d6858c302efa0cdd9fe0"
	Oct 18 10:30:56 old-k8s-version-309062 kubelet[776]: I1018 10:30:56.648947     776 scope.go:117] "RemoveContainer" containerID="df2bceb48eb6286384fe88e5aaf40fa9a00026686b7d31781f358616df89bec5"
	Oct 18 10:30:56 old-k8s-version-309062 kubelet[776]: E1018 10:30:56.649261     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zglwb_kubernetes-dashboard(a1b11f6f-40ca-4dc0-a12b-a7af607494ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb" podUID="a1b11f6f-40ca-4dc0-a12b-a7af607494ea"
	Oct 18 10:31:03 old-k8s-version-309062 kubelet[776]: I1018 10:31:03.668973     776 scope.go:117] "RemoveContainer" containerID="df2bceb48eb6286384fe88e5aaf40fa9a00026686b7d31781f358616df89bec5"
	Oct 18 10:31:03 old-k8s-version-309062 kubelet[776]: E1018 10:31:03.670103     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zglwb_kubernetes-dashboard(a1b11f6f-40ca-4dc0-a12b-a7af607494ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zglwb" podUID="a1b11f6f-40ca-4dc0-a12b-a7af607494ea"
	Oct 18 10:31:13 old-k8s-version-309062 kubelet[776]: I1018 10:31:13.816497     776 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 18 10:31:13 old-k8s-version-309062 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 10:31:13 old-k8s-version-309062 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 10:31:13 old-k8s-version-309062 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [1221147b76c646921a654388aa40c79193a87c89eee11d8ab529f19b710f6028] <==
	2025/10/18 10:30:38 Using namespace: kubernetes-dashboard
	2025/10/18 10:30:38 Using in-cluster config to connect to apiserver
	2025/10/18 10:30:38 Using secret token for csrf signing
	2025/10/18 10:30:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 10:30:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 10:30:38 Successful initial request to the apiserver, version: v1.28.0
	2025/10/18 10:30:38 Generating JWE encryption key
	2025/10/18 10:30:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 10:30:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 10:30:38 Initializing JWE encryption key from synchronized object
	2025/10/18 10:30:38 Creating in-cluster Sidecar client
	2025/10/18 10:30:38 Serving insecurely on HTTP port: 9090
	2025/10/18 10:30:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 10:31:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 10:30:38 Starting overwatch
	
	
	==> storage-provisioner [28b068be6ba6cf1e3afbc8ec9e6600adf10e615cd713b32937ec5dcd20863c29] <==
	I1018 10:30:53.689087       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 10:30:53.704991       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 10:30:53.705130       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 10:31:11.104356       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 10:31:11.104529       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-309062_e5d78c40-c279-4a12-8217-aa8dd528ca86!
	I1018 10:31:11.105687       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"30bc5ecc-ff23-48f8-9195-73f60d25bbff", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-309062_e5d78c40-c279-4a12-8217-aa8dd528ca86 became leader
	I1018 10:31:11.205690       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-309062_e5d78c40-c279-4a12-8217-aa8dd528ca86!
	
	
	==> storage-provisioner [fe5b69f4fe8d8244c0c04923326903928bd6aa32735e98e900cef0e8929410f7] <==
	I1018 10:30:23.121654       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 10:30:53.123479       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-309062 -n old-k8s-version-309062
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-309062 -n old-k8s-version-309062: exit status 2 (616.167336ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-309062 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (9.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-715182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-715182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (267.131701ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:33:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
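The paused-state check that fails above shells into the node and runs runc directly, so it can be reproduced by hand. A minimal sketch, assuming the docker driver and the profile/container name from this run (taken from the docker inspect output below); the runc invocation is the one quoted verbatim in the error:

	# the node container is named after the profile, so either of these works:
	docker exec default-k8s-diff-port-715182 sudo runc list -f json
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-715182 -- sudo runc list -f json

runc keeps its container state under /run/runc by default when running as root, and "runc list" fails with "open /run/runc: no such file or directory" when that directory has never been created, which matches the stderr captured here.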
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-715182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-715182 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-715182 describe deploy/metrics-server -n kube-system: exit status 1 (87.511837ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-715182 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-715182
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-715182:

-- stdout --
	[
	    {
	        "Id": "2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f",
	        "Created": "2025-10-18T10:31:31.395284928Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 475858,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:31:31.454755627Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/hosts",
	        "LogPath": "/var/lib/docker/containers/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f-json.log",
	        "Name": "/default-k8s-diff-port-715182",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-715182:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-715182",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f",
	                "LowerDir": "/var/lib/docker/overlay2/6ff6ee3c921ec4dcd2c6886a96b742acee0f82f430b6751112e705bca4f05201-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ff6ee3c921ec4dcd2c6886a96b742acee0f82f430b6751112e705bca4f05201/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ff6ee3c921ec4dcd2c6886a96b742acee0f82f430b6751112e705bca4f05201/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ff6ee3c921ec4dcd2c6886a96b742acee0f82f430b6751112e705bca4f05201/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-715182",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-715182/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-715182",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-715182",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-715182",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4213dd091fccd3344aba389be05746d9c9fa40abfa493cc9001e021e318cab31",
	            "SandboxKey": "/var/run/docker/netns/4213dd091fcc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-715182": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:0f:db:b3:19:b9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "788491100ff23209b4a58b30f7bb3bc0737bdeee77d901da545d647f4fa241c9",
	                    "EndpointID": "41bb4670558c471d0f739f1c5231269fff8fc43b2a4a1d89e1ecabfa81e5d90e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-715182",
	                        "2afd5447007b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
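The inspect output above shows each published container port (22, 2376, 5000, 8444, 32443) bound to an ephemeral host port on 127.0.0.1. As a reproduction aid rather than harness output, a single mapping can be pulled out by hand with the same Go template the harness itself runs later in this log:

	# Host port Docker assigned to the node's SSH port (22/tcp); the value
	# is ephemeral and differs between runs.
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  default-k8s-diff-port-715182
	# -> 33429 on this run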
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-715182 -n default-k8s-diff-port-715182
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-715182 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-715182 logs -n 25: (1.193806423s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-881658 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-881658                │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-881658                │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-881658                │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo crio config                                                                                                                                                                                                             │ cilium-881658                │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ delete  │ -p cilium-881658                                                                                                                                                                                                                              │ cilium-881658                │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │ 18 Oct 25 10:27 UTC │
	│ start   │ -p cert-expiration-733799 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-733799       │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │ 18 Oct 25 10:28 UTC │
	│ delete  │ -p force-systemd-env-360583                                                                                                                                                                                                                   │ force-systemd-env-360583     │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ start   │ -p cert-options-233372 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-233372          │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ ssh     │ cert-options-233372 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-233372          │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ ssh     │ -p cert-options-233372 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-233372          │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ delete  │ -p cert-options-233372                                                                                                                                                                                                                        │ cert-options-233372          │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ start   │ -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-309062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:29 UTC │                     │
	│ stop    │ -p old-k8s-version-309062 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:29 UTC │ 18 Oct 25 10:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-309062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:30 UTC │ 18 Oct 25 10:30 UTC │
	│ start   │ -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:30 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p cert-expiration-733799 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-733799       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ image   │ old-k8s-version-309062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ pause   │ -p old-k8s-version-309062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │                     │
	│ delete  │ -p old-k8s-version-309062                                                                                                                                                                                                                     │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ delete  │ -p old-k8s-version-309062                                                                                                                                                                                                                     │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:32 UTC │
	│ delete  │ -p cert-expiration-733799                                                                                                                                                                                                                     │ cert-expiration-733799       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:32 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-715182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
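	# Editor's sketch, not harness output: the final audit row above, the one
	# with a blank END TIME, is the step whose failure produced this
	# post-mortem. Replayed standalone it is roughly:
	out/minikube-linux-arm64 addons enable metrics-server \
	  -p default-k8s-diff-port-715182 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain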
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:31:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 10:31:31.108759  475717 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:31:31.108957  475717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:31:31.108970  475717 out.go:374] Setting ErrFile to fd 2...
	I1018 10:31:31.108975  475717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:31:31.109285  475717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:31:31.109869  475717 out.go:368] Setting JSON to false
	I1018 10:31:31.111136  475717 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8042,"bootTime":1760775450,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:31:31.111211  475717 start.go:141] virtualization:  
	I1018 10:31:31.126546  475717 out.go:179] * [embed-certs-101897] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:31:31.158137  475717 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:31:31.158142  475717 notify.go:220] Checking for updates...
	I1018 10:31:31.190516  475717 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:31:31.223881  475717 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:31:31.230364  475717 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:31:31.235684  475717 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:31:31.245138  475717 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:31:26.593781  475082 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 10:31:26.594206  475082 start.go:159] libmachine.API.Create for "default-k8s-diff-port-715182" (driver="docker")
	I1018 10:31:26.594251  475082 client.go:168] LocalClient.Create starting
	I1018 10:31:26.594454  475082 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem
	I1018 10:31:26.594556  475082 main.go:141] libmachine: Decoding PEM data...
	I1018 10:31:26.594572  475082 main.go:141] libmachine: Parsing certificate...
	I1018 10:31:26.594669  475082 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem
	I1018 10:31:26.594714  475082 main.go:141] libmachine: Decoding PEM data...
	I1018 10:31:26.594729  475082 main.go:141] libmachine: Parsing certificate...
	I1018 10:31:26.595162  475082 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-715182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 10:31:26.615046  475082 cli_runner.go:211] docker network inspect default-k8s-diff-port-715182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 10:31:26.615136  475082 network_create.go:284] running [docker network inspect default-k8s-diff-port-715182] to gather additional debugging logs...
	I1018 10:31:26.615156  475082 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-715182
	W1018 10:31:26.635596  475082 cli_runner.go:211] docker network inspect default-k8s-diff-port-715182 returned with exit code 1
	I1018 10:31:26.635628  475082 network_create.go:287] error running [docker network inspect default-k8s-diff-port-715182]: docker network inspect default-k8s-diff-port-715182: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-715182 not found
	I1018 10:31:26.635642  475082 network_create.go:289] output of [docker network inspect default-k8s-diff-port-715182]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-715182 not found
	
	** /stderr **
	I1018 10:31:26.635785  475082 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:31:26.661737  475082 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-57e2bd20fa2f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:61:d0:06:18:0c} reservation:<nil>}
	I1018 10:31:26.667969  475082 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb4a8c61b69d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:8c:0f:03:ab:d8} reservation:<nil>}
	I1018 10:31:26.668396  475082 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1d3a8356dfdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:ce:7a:d0:e4:d4} reservation:<nil>}
	I1018 10:31:26.668842  475082 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019da690}
	I1018 10:31:26.668874  475082 network_create.go:124] attempt to create docker network default-k8s-diff-port-715182 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 10:31:26.668931  475082 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-715182 default-k8s-diff-port-715182
	I1018 10:31:26.754867  475082 network_create.go:108] docker network default-k8s-diff-port-715182 192.168.76.0/24 created
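	# Editor's sketch (assumes the docker CLI on the build host): verify that
	# the bridge network just created carries the chosen subnet and gateway.
	docker network inspect default-k8s-diff-port-715182 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	# expected on this run: 192.168.76.0/24 via 192.168.76.1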
	I1018 10:31:26.754899  475082 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-715182" container
	I1018 10:31:26.754976  475082 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 10:31:26.786620  475082 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-715182 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-715182 --label created_by.minikube.sigs.k8s.io=true
	I1018 10:31:26.809303  475082 oci.go:103] Successfully created a docker volume default-k8s-diff-port-715182
	I1018 10:31:26.809381  475082 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-715182-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-715182 --entrypoint /usr/bin/test -v default-k8s-diff-port-715182:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 10:31:27.479425  475082 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-715182
	I1018 10:31:27.479469  475082 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:31:27.479489  475082 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 10:31:27.479570  475082 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-715182:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 10:31:31.286351  475082 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-715182:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.806731968s)
	I1018 10:31:31.286378  475082 kic.go:203] duration metric: took 3.806886883s to extract preloaded images to volume ...
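	# Editor's sketch: spot-check that the preload really landed in the profile
	# volume, reusing the sidecar pattern from the run above; whether
	# /var/lib/containers is the exact path populated is an assumption based on
	# the cri-o preload name.
	docker run --rm --entrypoint /bin/ls \
	  -v default-k8s-diff-port-715182:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 \
	  /var/lib/containers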
	W1018 10:31:31.286505  475082 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 10:31:31.286618  475082 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 10:31:31.249468  475717 config.go:182] Loaded profile config "default-k8s-diff-port-715182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:31:31.249577  475717 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:31:31.269756  475717 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:31:31.269885  475717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:31:31.367860  475717 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-18 10:31:31.358396758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:31:31.367962  475717 docker.go:318] overlay module found
	I1018 10:31:31.371069  475717 out.go:179] * Using the docker driver based on user configuration
	I1018 10:31:31.373950  475717 start.go:305] selected driver: docker
	I1018 10:31:31.373967  475717 start.go:925] validating driver "docker" against <nil>
	I1018 10:31:31.373980  475717 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:31:31.374686  475717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:31:31.476454  475717 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:55 SystemTime:2025-10-18 10:31:31.466956134 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:31:31.476627  475717 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 10:31:31.476854  475717 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:31:31.480084  475717 out.go:179] * Using Docker driver with root privileges
	I1018 10:31:31.483034  475717 cni.go:84] Creating CNI manager for ""
	I1018 10:31:31.483112  475717 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:31:31.483125  475717 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 10:31:31.483206  475717 start.go:349] cluster config:
	{Name:embed-certs-101897 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:31:31.486406  475717 out.go:179] * Starting "embed-certs-101897" primary control-plane node in "embed-certs-101897" cluster
	I1018 10:31:31.489155  475717 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:31:31.493022  475717 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:31:31.496006  475717 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:31:31.496064  475717 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 10:31:31.496073  475717 cache.go:58] Caching tarball of preloaded images
	I1018 10:31:31.496174  475717 preload.go:233] Found /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 10:31:31.496183  475717 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 10:31:31.496290  475717 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/config.json ...
	I1018 10:31:31.496307  475717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/config.json: {Name:mkd65c2fa6431ab96d83b9e3017962326c7db17d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:31.496463  475717 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:31:31.518743  475717 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:31:31.518762  475717 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 10:31:31.518833  475717 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:31:31.518905  475717 start.go:360] acquireMachinesLock for embed-certs-101897: {Name:mkdf4f50051bf510e5fec7789d20200884d252f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:31:31.519065  475717 start.go:364] duration metric: took 139.833µs to acquireMachinesLock for "embed-certs-101897"
	I1018 10:31:31.519142  475717 start.go:93] Provisioning new machine with config: &{Name:embed-certs-101897 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:31:31.519236  475717 start.go:125] createHost starting for "" (driver="docker")
	I1018 10:31:31.523229  475717 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 10:31:31.523460  475717 start.go:159] libmachine.API.Create for "embed-certs-101897" (driver="docker")
	I1018 10:31:31.523502  475717 client.go:168] LocalClient.Create starting
	I1018 10:31:31.523580  475717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem
	I1018 10:31:31.523612  475717 main.go:141] libmachine: Decoding PEM data...
	I1018 10:31:31.523625  475717 main.go:141] libmachine: Parsing certificate...
	I1018 10:31:31.523679  475717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem
	I1018 10:31:31.523698  475717 main.go:141] libmachine: Decoding PEM data...
	I1018 10:31:31.523709  475717 main.go:141] libmachine: Parsing certificate...
	I1018 10:31:31.524055  475717 cli_runner.go:164] Run: docker network inspect embed-certs-101897 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 10:31:31.546788  475717 cli_runner.go:211] docker network inspect embed-certs-101897 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 10:31:31.546859  475717 network_create.go:284] running [docker network inspect embed-certs-101897] to gather additional debugging logs...
	I1018 10:31:31.546876  475717 cli_runner.go:164] Run: docker network inspect embed-certs-101897
	W1018 10:31:31.570029  475717 cli_runner.go:211] docker network inspect embed-certs-101897 returned with exit code 1
	I1018 10:31:31.570056  475717 network_create.go:287] error running [docker network inspect embed-certs-101897]: docker network inspect embed-certs-101897: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-101897 not found
	I1018 10:31:31.570071  475717 network_create.go:289] output of [docker network inspect embed-certs-101897]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-101897 not found
	
	** /stderr **
	I1018 10:31:31.570164  475717 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:31:31.588652  475717 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-57e2bd20fa2f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:61:d0:06:18:0c} reservation:<nil>}
	I1018 10:31:31.588913  475717 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb4a8c61b69d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:8c:0f:03:ab:d8} reservation:<nil>}
	I1018 10:31:31.589920  475717 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1d3a8356dfdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:ce:7a:d0:e4:d4} reservation:<nil>}
	I1018 10:31:31.590226  475717 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-788491100ff2 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:73:3c:bb:41:b2} reservation:<nil>}
	I1018 10:31:31.590753  475717 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a84700}
	I1018 10:31:31.590775  475717 network_create.go:124] attempt to create docker network embed-certs-101897 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 10:31:31.590839  475717 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-101897 embed-certs-101897
	I1018 10:31:31.768059  475717 network_create.go:108] docker network embed-certs-101897 192.168.85.0/24 created
	I1018 10:31:31.768093  475717 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-101897" container
	I1018 10:31:31.768618  475717 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 10:31:31.800766  475717 cli_runner.go:164] Run: docker volume create embed-certs-101897 --label name.minikube.sigs.k8s.io=embed-certs-101897 --label created_by.minikube.sigs.k8s.io=true
	I1018 10:31:31.838059  475717 oci.go:103] Successfully created a docker volume embed-certs-101897
	I1018 10:31:31.838140  475717 cli_runner.go:164] Run: docker run --rm --name embed-certs-101897-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-101897 --entrypoint /usr/bin/test -v embed-certs-101897:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 10:31:32.718381  475717 oci.go:107] Successfully prepared a docker volume embed-certs-101897
	I1018 10:31:32.718426  475717 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:31:32.718458  475717 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 10:31:32.718529  475717 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-101897:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 10:31:31.380407  475082 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-715182 --name default-k8s-diff-port-715182 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-715182 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-715182 --network default-k8s-diff-port-715182 --ip 192.168.76.2 --volume default-k8s-diff-port-715182:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 10:31:31.704433  475082 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Running}}
	I1018 10:31:31.725568  475082 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:31:31.762160  475082 cli_runner.go:164] Run: docker exec default-k8s-diff-port-715182 stat /var/lib/dpkg/alternatives/iptables
	I1018 10:31:31.861956  475082 oci.go:144] the created container "default-k8s-diff-port-715182" has a running status.
	I1018 10:31:31.861997  475082 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa...
	I1018 10:31:32.305884  475082 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 10:31:32.338398  475082 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:31:32.364595  475082 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 10:31:32.364614  475082 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-715182 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 10:31:32.461522  475082 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:31:32.484677  475082 machine.go:93] provisionDockerMachine start ...
	I1018 10:31:32.484771  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:32.511905  475082 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:32.512241  475082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I1018 10:31:32.512250  475082 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:31:32.517202  475082 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50940->127.0.0.1:33429: read: connection reset by peer
	I1018 10:31:35.672852  475082 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-715182
	
	I1018 10:31:35.672876  475082 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-715182"
	I1018 10:31:35.672939  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:35.692400  475082 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:35.692742  475082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I1018 10:31:35.692757  475082 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-715182 && echo "default-k8s-diff-port-715182" | sudo tee /etc/hostname
	I1018 10:31:35.850530  475082 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-715182
	
	I1018 10:31:35.850610  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:35.869230  475082 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:35.869556  475082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I1018 10:31:35.869581  475082 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-715182' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-715182/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-715182' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:31:36.019136  475082 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 10:31:36.019212  475082 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:31:36.019261  475082 ubuntu.go:190] setting up certificates
	I1018 10:31:36.019298  475082 provision.go:84] configureAuth start
	I1018 10:31:36.019385  475082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-715182
	I1018 10:31:36.035766  475082 provision.go:143] copyHostCerts
	I1018 10:31:36.035840  475082 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:31:36.035849  475082 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:31:36.035918  475082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:31:36.036012  475082 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:31:36.036017  475082 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:31:36.036044  475082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:31:36.036093  475082 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:31:36.036097  475082 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:31:36.036119  475082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:31:36.036162  475082 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-715182 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-715182 localhost minikube]
	I1018 10:31:36.781008  475082 provision.go:177] copyRemoteCerts
	I1018 10:31:36.781088  475082 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:31:36.781149  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:36.798066  475082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:31:36.901090  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:31:36.921286  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 10:31:36.944057  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:31:36.965699  475082 provision.go:87] duration metric: took 946.368739ms to configureAuth
	I1018 10:31:36.965793  475082 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:31:36.965971  475082 config.go:182] Loaded profile config "default-k8s-diff-port-715182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:31:36.966073  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:36.987801  475082 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:36.988118  475082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I1018 10:31:36.988132  475082 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:31:37.333460  475082 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:31:37.333487  475082 machine.go:96] duration metric: took 4.848791847s to provisionDockerMachine
	I1018 10:31:37.333498  475082 client.go:171] duration metric: took 10.739240115s to LocalClient.Create
	I1018 10:31:37.333511  475082 start.go:167] duration metric: took 10.739370111s to libmachine.API.Create "default-k8s-diff-port-715182"
	I1018 10:31:37.333519  475082 start.go:293] postStartSetup for "default-k8s-diff-port-715182" (driver="docker")
	I1018 10:31:37.333529  475082 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:31:37.333592  475082 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:31:37.333660  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:37.362239  475082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:31:37.496694  475082 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:31:37.501499  475082 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:31:37.501525  475082 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:31:37.501536  475082 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:31:37.501596  475082 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:31:37.501678  475082 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:31:37.501790  475082 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:31:37.523965  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:31:37.560369  475082 start.go:296] duration metric: took 226.835563ms for postStartSetup
	I1018 10:31:37.560886  475082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-715182
	I1018 10:31:37.620419  475082 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/config.json ...
	I1018 10:31:37.620721  475082 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:31:37.620761  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:37.674735  475082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:31:37.825535  475082 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:31:37.851240  475082 start.go:128] duration metric: took 11.260812369s to createHost
	I1018 10:31:37.851275  475082 start.go:83] releasing machines lock for "default-k8s-diff-port-715182", held for 11.260934906s
	I1018 10:31:37.851369  475082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-715182
	I1018 10:31:37.904900  475082 ssh_runner.go:195] Run: cat /version.json
	I1018 10:31:37.904954  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:37.905635  475082 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:31:37.905706  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:37.989442  475082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:31:38.001303  475082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:31:38.121667  475082 ssh_runner.go:195] Run: systemctl --version
	I1018 10:31:38.129021  475082 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:31:38.176037  475082 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:31:38.268772  475082 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:31:38.268844  475082 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:31:38.300684  475082 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
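The find/-exec command above sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so that the CNI chosen later (kindnet, per the "recommending kindnet" line further down) is the only active one. A sketch of the same rename pass in Go, assuming the paths and suffix from the log:

    // Hedged sketch of the rename pass performed by the find/-exec command above.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        var disabled []string
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, _ := filepath.Glob(pat)
            for _, f := range matches {
                if strings.HasSuffix(f, ".mk_disabled") {
                    continue // already sidelined on a previous run
                }
                if err := os.Rename(f, f+".mk_disabled"); err == nil {
                    disabled = append(disabled, f)
                }
            }
        }
        fmt.Println("disabled:", disabled)
    }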
	I1018 10:31:38.300704  475082 start.go:495] detecting cgroup driver to use...
	I1018 10:31:38.300735  475082 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:31:38.300782  475082 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:31:38.319982  475082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:31:38.334000  475082 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:31:38.334060  475082 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:31:38.352011  475082 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:31:38.371922  475082 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:31:38.550261  475082 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:31:38.672267  475082 docker.go:234] disabling docker service ...
	I1018 10:31:38.672338  475082 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:31:38.695526  475082 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:31:38.708798  475082 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:31:38.822884  475082 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:31:38.950389  475082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:31:38.972241  475082 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:31:39.006571  475082 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:31:39.006640  475082 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:39.021120  475082 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:31:39.021280  475082 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:39.039937  475082 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:39.051677  475082 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:39.065341  475082 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:31:39.080011  475082 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:39.092689  475082 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:39.117339  475082 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:39.128189  475082 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:31:39.139030  475082 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:31:39.147159  475082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:31:39.311832  475082 ssh_runner.go:195] Run: sudo systemctl restart crio
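The sed calls above pin the pause image, force cgroup_manager to "cgroupfs" (matching the cgroup driver detected on the host), and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0 so pods can bind low ports, all before the daemon-reload and crio restart. A sketch approximating those in-place edits with Go regexps, with the config path taken from the log:

    // Hedged sketch approximating the sed -i edits above on 02-crio.conf.
    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        // Pin the pause image and force the cgroupfs cgroup manager.
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        // Allow binding low ports inside pods without extra capabilities.
        data = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
            ReplaceAll(data, []byte("default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\","))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            panic(err)
        }
    }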
	I1018 10:31:39.468970  475082 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:31:39.469063  475082 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:31:39.474150  475082 start.go:563] Will wait 60s for crictl version
	I1018 10:31:39.474228  475082 ssh_runner.go:195] Run: which crictl
	I1018 10:31:39.478267  475082 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:31:39.509156  475082 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:31:39.509253  475082 ssh_runner.go:195] Run: crio --version
	I1018 10:31:39.549865  475082 ssh_runner.go:195] Run: crio --version
	I1018 10:31:39.591010  475082 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:31:36.915643  475717 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-101897:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.197066874s)
	I1018 10:31:36.915675  475717 kic.go:203] duration metric: took 4.19722366s to extract preloaded images to volume ...
	W1018 10:31:36.915798  475717 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 10:31:36.915935  475717 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 10:31:36.998857  475717 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-101897 --name embed-certs-101897 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-101897 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-101897 --network embed-certs-101897 --ip 192.168.85.2 --volume embed-certs-101897:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 10:31:37.374988  475717 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Running}}
	I1018 10:31:37.399186  475717 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:31:37.423927  475717 cli_runner.go:164] Run: docker exec embed-certs-101897 stat /var/lib/dpkg/alternatives/iptables
	I1018 10:31:37.474578  475717 oci.go:144] the created container "embed-certs-101897" has a running status.
	I1018 10:31:37.474615  475717 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa...
	I1018 10:31:38.409946  475717 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 10:31:38.437967  475717 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:31:38.465685  475717 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 10:31:38.465705  475717 kic_runner.go:114] Args: [docker exec --privileged embed-certs-101897 chown docker:docker /home/docker/.ssh/authorized_keys]
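The "Creating ssh key for kic" step above boils down to generating an RSA keypair, writing the private half to the profile's id_rsa, and installing the public half as the docker user's authorized_keys (the 381-byte copy and chown that follow). A sketch of that key setup using golang.org/x/crypto/ssh; file names here are illustrative, not the exact paths minikube writes:

    // Hedged sketch of the key setup above: generate an RSA keypair, PEM-encode
    // the private key, and render the authorized_keys line for the public key.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
            panic(err)
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        // One authorized_keys line, e.g. "ssh-rsa AAAA...\n".
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
            panic(err)
        }
    }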
	I1018 10:31:38.535328  475717 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:31:38.555277  475717 machine.go:93] provisionDockerMachine start ...
	I1018 10:31:38.555376  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:38.575204  475717 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:38.575555  475717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1018 10:31:38.575572  475717 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:31:38.576167  475717 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45284->127.0.0.1:33434: read: connection reset by peer
	I1018 10:31:39.593927  475082 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-715182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
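The --format argument above is a Go text/template evaluated against docker's network-inspect data, emitting hand-built JSON with the network's name, driver, subnet, gateway, MTU, and container IPs. A reduced sketch that evaluates the same field paths against stub data; the struct shapes are assumptions mirroring docker's inspect output, not docker's real types:

    // Hedged sketch: evaluate a trimmed version of the --format template above
    // against stub data shaped like docker's network-inspect output.
    package main

    import (
        "os"
        "text/template"
    )

    type ipamConfig struct{ Subnet, Gateway string }

    type network struct {
        Name, Driver string
        IPAM         struct{ Config []ipamConfig }
    }

    func main() {
        const format = `{"Name": "{{.Name}}","Driver": "{{.Driver}}",` +
            `"Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}",` +
            `"Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}"}` + "\n"
        n := network{Name: "default-k8s-diff-port-715182", Driver: "bridge"}
        n.IPAM.Config = []ipamConfig{{Subnet: "192.168.76.0/24", Gateway: "192.168.76.1"}}
        tmpl := template.Must(template.New("net").Parse(format))
        if err := tmpl.Execute(os.Stdout, n); err != nil {
            panic(err)
        }
    }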
	I1018 10:31:39.610567  475082 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 10:31:39.614726  475082 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
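The bash one-liner above filters any stale host.minikube.internal entry out of /etc/hosts, appends the fresh mapping, and installs the result with cp rather than mv; a likely reason (an inference, not stated in the log) is that cp overwrites in place and keeps the same inode, which matters when /etc/hosts is a Docker bind mount that a rename cannot replace. A sketch of the same rewrite:

    // Hedged sketch of the /etc/hosts rewrite above: drop stale entries for the
    // name, append the fresh mapping, and stage the result in a temp file.
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const name, ip = "host.minikube.internal", "192.168.76.1"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var keep []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, ip+"\t"+name)
        if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(keep, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
        // The logged command then does the equivalent of: sudo cp /tmp/h.$$ /etc/hosts
    }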
	I1018 10:31:39.625311  475082 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-715182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-715182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:31:39.625429  475082 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:31:39.625501  475082 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:31:39.668765  475082 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:31:39.668787  475082 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:31:39.668843  475082 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:31:39.695643  475082 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:31:39.695667  475082 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:31:39.695675  475082 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1018 10:31:39.695769  475082 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-715182 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-715182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 10:31:39.695854  475082 ssh_runner.go:195] Run: crio config
	I1018 10:31:39.765831  475082 cni.go:84] Creating CNI manager for ""
	I1018 10:31:39.765854  475082 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:31:39.765868  475082 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:31:39.765891  475082 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-715182 NodeName:default-k8s-diff-port-715182 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:31:39.766021  475082 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-715182"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
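The generated config above stacks four documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); the embed-certs run later in this log emits an essentially identical file, differing only in the bind port (8443 vs 8444), node name, and node IP. As a hedged aside, a file in this shape can typically be exercised offline with kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml before a real init is attempted.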
	I1018 10:31:39.766095  475082 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:31:39.774440  475082 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:31:39.774511  475082 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:31:39.781972  475082 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 10:31:39.794732  475082 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:31:39.807748  475082 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1018 10:31:39.820254  475082 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:31:39.823636  475082 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:31:39.833113  475082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:31:39.940666  475082 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:31:39.958156  475082 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182 for IP: 192.168.76.2
	I1018 10:31:39.958230  475082 certs.go:195] generating shared ca certs ...
	I1018 10:31:39.958260  475082 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:39.958431  475082 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:31:39.958506  475082 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:31:39.958537  475082 certs.go:257] generating profile certs ...
	I1018 10:31:39.958640  475082 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.key
	I1018 10:31:39.958681  475082 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.crt with IP's: []
	I1018 10:31:40.624187  475082 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.crt ...
	I1018 10:31:40.624223  475082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.crt: {Name:mkaf229aa28b7977eadb932ec5254ad5394152f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:40.624424  475082 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.key ...
	I1018 10:31:40.624438  475082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.key: {Name:mk7fc6c9d595be8b0b890cddf15b543d6402cfeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:40.624543  475082 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.key.7b193c3d
	I1018 10:31:40.624564  475082 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.crt.7b193c3d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 10:31:41.067211  475082 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.crt.7b193c3d ...
	I1018 10:31:41.067253  475082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.crt.7b193c3d: {Name:mke2be9f248a0847223ebc620a34ed95ff627493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:41.067442  475082 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.key.7b193c3d ...
	I1018 10:31:41.067459  475082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.key.7b193c3d: {Name:mk96f1bb534f441740de90d6e4e4637b836bbfcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:41.067543  475082 certs.go:382] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.crt.7b193c3d -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.crt
	I1018 10:31:41.067624  475082 certs.go:386] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.key.7b193c3d -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.key
	I1018 10:31:41.067682  475082 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.key
	I1018 10:31:41.067703  475082 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.crt with IP's: []
	I1018 10:31:41.729377  475717 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-101897
	
	I1018 10:31:41.729404  475717 ubuntu.go:182] provisioning hostname "embed-certs-101897"
	I1018 10:31:41.729466  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:41.756507  475717 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:41.756838  475717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1018 10:31:41.756854  475717 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-101897 && echo "embed-certs-101897" | sudo tee /etc/hostname
	I1018 10:31:41.929215  475717 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-101897
	
	I1018 10:31:41.929332  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:41.959113  475717 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:41.959529  475717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1018 10:31:41.959555  475717 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-101897' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-101897/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-101897' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:31:42.127062  475717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 10:31:42.127101  475717 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:31:42.127142  475717 ubuntu.go:190] setting up certificates
	I1018 10:31:42.127155  475717 provision.go:84] configureAuth start
	I1018 10:31:42.127237  475717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-101897
	I1018 10:31:42.158827  475717 provision.go:143] copyHostCerts
	I1018 10:31:42.158900  475717 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:31:42.158910  475717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:31:42.158997  475717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:31:42.159104  475717 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:31:42.159110  475717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:31:42.159137  475717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:31:42.159191  475717 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:31:42.159197  475717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:31:42.159220  475717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:31:42.159337  475717 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.embed-certs-101897 san=[127.0.0.1 192.168.85.2 embed-certs-101897 localhost minikube]
	I1018 10:31:42.645265  475717 provision.go:177] copyRemoteCerts
	I1018 10:31:42.645330  475717 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:31:42.645370  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:42.665170  475717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:31:42.778373  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:31:42.800764  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 10:31:42.826209  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 10:31:42.855572  475717 provision.go:87] duration metric: took 728.388861ms to configureAuth
	I1018 10:31:42.855643  475717 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:31:42.855872  475717 config.go:182] Loaded profile config "embed-certs-101897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:31:42.856065  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:42.876616  475717 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:42.876918  475717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1018 10:31:42.876934  475717 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:31:43.191973  475717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:31:43.192046  475717 machine.go:96] duration metric: took 4.636745173s to provisionDockerMachine
	I1018 10:31:43.192071  475717 client.go:171] duration metric: took 11.668562362s to LocalClient.Create
	I1018 10:31:43.192117  475717 start.go:167] duration metric: took 11.668642363s to libmachine.API.Create "embed-certs-101897"
	I1018 10:31:43.192142  475717 start.go:293] postStartSetup for "embed-certs-101897" (driver="docker")
	I1018 10:31:43.192164  475717 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:31:43.192255  475717 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:31:43.192349  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:43.213097  475717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:31:43.321076  475717 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:31:43.325098  475717 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:31:43.325128  475717 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:31:43.325139  475717 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:31:43.325214  475717 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:31:43.325300  475717 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:31:43.325406  475717 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:31:43.333002  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:31:43.350863  475717 start.go:296] duration metric: took 158.693537ms for postStartSetup
	I1018 10:31:43.351275  475717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-101897
	I1018 10:31:43.369618  475717 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/config.json ...
	I1018 10:31:43.369890  475717 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:31:43.369941  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:43.386784  475717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:31:43.490614  475717 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:31:43.495823  475717 start.go:128] duration metric: took 11.976570358s to createHost
	I1018 10:31:43.495847  475717 start.go:83] releasing machines lock for "embed-certs-101897", held for 11.97676898s
	I1018 10:31:43.495914  475717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-101897
	I1018 10:31:43.514818  475717 ssh_runner.go:195] Run: cat /version.json
	I1018 10:31:43.514870  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:43.515097  475717 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:31:43.515176  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:43.549411  475717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:31:43.555836  475717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:31:43.669622  475717 ssh_runner.go:195] Run: systemctl --version
	I1018 10:31:43.768553  475717 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:31:43.820670  475717 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:31:43.826086  475717 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:31:43.826168  475717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:31:43.857472  475717 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 10:31:43.857496  475717 start.go:495] detecting cgroup driver to use...
	I1018 10:31:43.857528  475717 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:31:43.857581  475717 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:31:43.880557  475717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:31:43.895247  475717 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:31:43.895313  475717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:31:43.913474  475717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:31:43.934213  475717 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:31:44.084100  475717 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:31:44.251095  475717 docker.go:234] disabling docker service ...
	I1018 10:31:44.251234  475717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:31:44.279796  475717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:31:44.294667  475717 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:31:44.445843  475717 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:31:44.628940  475717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:31:44.643452  475717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:31:44.657675  475717 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:31:44.657748  475717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:44.666274  475717 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:31:44.666357  475717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:44.675135  475717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:44.683798  475717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:44.693093  475717 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:31:44.702089  475717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:44.712075  475717 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:44.727022  475717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:44.736803  475717 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:31:44.745992  475717 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:31:44.755470  475717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:31:44.896858  475717 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 10:31:45.052123  475717 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:31:45.052219  475717 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:31:45.073934  475717 start.go:563] Will wait 60s for crictl version
	I1018 10:31:45.074011  475717 ssh_runner.go:195] Run: which crictl
	I1018 10:31:45.079522  475717 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:31:45.116726  475717 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
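The "Will wait 60s for socket path" and "Will wait 60s for crictl version" lines above describe bounded polls after the crio restart: keep checking until the CRI socket appears (and crictl answers), or give up at the deadline. A sketch of such a wait loop; the exact polling interval is an assumption:

    // Hedged sketch of a bounded wait for the CRI socket after restarting crio.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil // socket file exists; crictl version can be probed next
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            panic(err)
        }
    }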
	I1018 10:31:45.116838  475717 ssh_runner.go:195] Run: crio --version
	I1018 10:31:45.181915  475717 ssh_runner.go:195] Run: crio --version
	I1018 10:31:45.243791  475717 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:31:45.246766  475717 cli_runner.go:164] Run: docker network inspect embed-certs-101897 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:31:45.270492  475717 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 10:31:45.275804  475717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:31:45.290285  475717 kubeadm.go:883] updating cluster {Name:embed-certs-101897 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:31:45.290431  475717 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:31:45.290507  475717 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:31:45.341639  475717 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:31:45.341668  475717 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:31:45.341732  475717 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:31:45.386304  475717 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:31:45.386366  475717 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:31:45.386376  475717 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 10:31:45.386594  475717 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-101897 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 10:31:45.386922  475717 ssh_runner.go:195] Run: crio config
	I1018 10:31:45.482822  475717 cni.go:84] Creating CNI manager for ""
	I1018 10:31:45.482847  475717 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:31:45.482861  475717 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:31:45.482884  475717 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-101897 NodeName:embed-certs-101897 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:31:45.483047  475717 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-101897"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 10:31:45.483128  475717 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:31:45.494906  475717 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:31:45.494988  475717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:31:45.505670  475717 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 10:31:45.524397  475717 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:31:45.539329  475717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1018 10:31:45.553905  475717 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:31:45.558781  475717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:31:45.569258  475717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:31:45.703066  475717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:31:45.720264  475717 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897 for IP: 192.168.85.2
	I1018 10:31:45.720287  475717 certs.go:195] generating shared ca certs ...
	I1018 10:31:45.720320  475717 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:45.720501  475717 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:31:45.720561  475717 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:31:45.720574  475717 certs.go:257] generating profile certs ...
	I1018 10:31:45.720638  475717 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/client.key
	I1018 10:31:45.720653  475717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/client.crt with IP's: []
	I1018 10:31:45.822491  475717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/client.crt ...
	I1018 10:31:45.822525  475717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/client.crt: {Name:mke9cef39cf3c9ed5958ddb0b28743026da2d659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:45.822716  475717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/client.key ...
	I1018 10:31:45.822732  475717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/client.key: {Name:mka7b069975e81726e52c31299137422d3fa2629 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:45.822814  475717 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.key.cf2721a4
	I1018 10:31:45.822833  475717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.crt.cf2721a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
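
The apiserver certificate above is signed for the service VIPs (10.96.0.1, 10.0.0.1), loopback, and the node IP, so clients can reach the apiserver under any of those addresses. A self-contained sketch of issuing a serving cert with those IP SANs using Go's crypto/x509 (self-signed here for brevity; minikube signs with its cluster CA):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
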
	I1018 10:31:42.017884  475082 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.crt ...
	I1018 10:31:42.017927  475082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.crt: {Name:mkf221e8f6c1d33743f02c6335617dce0ab9b1ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:42.018129  475082 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.key ...
	I1018 10:31:42.018148  475082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.key: {Name:mk8a8c8bbc1f2a62b28ec878ae60c144682cc40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:42.018347  475082 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:31:42.018404  475082 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:31:42.018419  475082 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:31:42.018447  475082 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:31:42.018476  475082 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:31:42.018503  475082 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:31:42.018555  475082 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:31:42.019194  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:31:42.045162  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:31:42.073900  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:31:42.102512  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:31:42.126229  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 10:31:42.156729  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:31:42.185112  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:31:42.221953  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 10:31:42.269240  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:31:42.307139  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:31:42.330964  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:31:42.349952  475082 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:31:42.363506  475082 ssh_runner.go:195] Run: openssl version
	I1018 10:31:42.370074  475082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:31:42.378750  475082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:31:42.382826  475082 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:31:42.382905  475082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:31:42.424391  475082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:31:42.433399  475082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:31:42.441406  475082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:31:42.445680  475082 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:31:42.445740  475082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:31:42.487887  475082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 10:31:42.496347  475082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:31:42.504551  475082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:31:42.508758  475082 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:31:42.508821  475082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:31:42.550874  475082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
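
The hash/ln sequence above installs each extra CA under /etc/ssl/certs using OpenSSL's subject-hash link name (<hash>.0), which is how OpenSSL's certificate-directory lookup finds trust anchors. A sketch of the same step in Go (installCA is a hypothetical helper; it shells out to openssl just as the log does):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the OpenSSL subject hash for a PEM cert and links
// it as /etc/ssl/certs/<hash>.0 if no such link exists yet.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already installed
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/2951932.pem"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("CA installed")
}
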
	I1018 10:31:42.559476  475082 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:31:42.564191  475082 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
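
The stat failure above is how minikube detects a first start: the kubelet client certificate kubeadm would have created is absent. An equivalent check in Go, for illustration only:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// Fresh-cluster heuristic mirroring the log above: if kubeadm's
	// apiserver-kubelet-client cert is missing, assume a full init.
	_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if errors.Is(err, fs.ErrNotExist) {
		fmt.Println("likely first start: running full kubeadm init")
	}
}
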
	I1018 10:31:42.564245  475082 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-715182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-715182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:31:42.564317  475082 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:31:42.564375  475082 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:31:42.604149  475082 cri.go:89] found id: ""
	I1018 10:31:42.604297  475082 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:31:42.617019  475082 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 10:31:42.625838  475082 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 10:31:42.625912  475082 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 10:31:42.637073  475082 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 10:31:42.637090  475082 kubeadm.go:157] found existing configuration files:
	
	I1018 10:31:42.637146  475082 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1018 10:31:42.646512  475082 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 10:31:42.646571  475082 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 10:31:42.655146  475082 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1018 10:31:42.664929  475082 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 10:31:42.664994  475082 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 10:31:42.672993  475082 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1018 10:31:42.681521  475082 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 10:31:42.681580  475082 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 10:31:42.689396  475082 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1018 10:31:42.697386  475082 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 10:31:42.697458  475082 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
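
The grep/rm sequence above prunes kubeconfigs that do not mention the expected control-plane endpoint, so kubeadm regenerates them cleanly. A sketch of that cleanup in Go (cleanStale is a hypothetical helper; the port is 8444 for this profile):

package main

import (
	"log"
	"os"
	"strings"
)

// cleanStale removes a kubeconfig that is missing or does not point at
// the expected control-plane endpoint, mirroring the grep/rm pairs above.
func cleanStale(path, endpoint string) {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return // config exists and matches; keep it
	}
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		log.Printf("remove %s: %v", path, err)
	}
}

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		cleanStale("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8444")
	}
}
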
	I1018 10:31:42.707691  475082 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 10:31:42.781658  475082 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 10:31:42.781938  475082 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 10:31:42.866408  475082 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 10:31:46.276517  475717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.crt.cf2721a4 ...
	I1018 10:31:46.276590  475717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.crt.cf2721a4: {Name:mk99f9dc25d745313d2c2dec6be440a6d27aebbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:46.276834  475717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.key.cf2721a4 ...
	I1018 10:31:46.276872  475717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.key.cf2721a4: {Name:mk6ac9eb27b775bc48282205d6d25f6ddb5fe0f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:46.277022  475717 certs.go:382] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.crt.cf2721a4 -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.crt
	I1018 10:31:46.277154  475717 certs.go:386] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.key.cf2721a4 -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.key
	I1018 10:31:46.277275  475717 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.key
	I1018 10:31:46.277314  475717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.crt with IP's: []
	I1018 10:31:46.549386  475717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.crt ...
	I1018 10:31:46.549413  475717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.crt: {Name:mk46b0e0b0944a2fffa37e66f4ec5cc0467cacda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:46.549586  475717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.key ...
	I1018 10:31:46.549595  475717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.key: {Name:mk51360afea0ae5803d08bb52281db45b37f4bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:46.549764  475717 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:31:46.549799  475717 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:31:46.549812  475717 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:31:46.549836  475717 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:31:46.549858  475717 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:31:46.549878  475717 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:31:46.549933  475717 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:31:46.550502  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:31:46.571221  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:31:46.589047  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:31:46.615192  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:31:46.634147  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 10:31:46.656984  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:31:46.678077  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:31:46.728159  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 10:31:46.765109  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:31:46.786112  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:31:46.809924  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:31:46.832797  475717 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:31:46.848216  475717 ssh_runner.go:195] Run: openssl version
	I1018 10:31:46.854733  475717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:31:46.863781  475717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:31:46.867904  475717 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:31:46.868018  475717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:31:46.913854  475717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 10:31:46.927037  475717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:31:46.937416  475717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:31:46.942258  475717 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:31:46.942377  475717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:31:46.985319  475717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:31:46.998766  475717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:31:47.013265  475717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:31:47.025517  475717 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:31:47.025588  475717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:31:47.068051  475717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:31:47.076398  475717 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:31:47.080192  475717 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 10:31:47.080244  475717 kubeadm.go:400] StartCluster: {Name:embed-certs-101897 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:31:47.080317  475717 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:31:47.080371  475717 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:31:47.115364  475717 cri.go:89] found id: ""
	I1018 10:31:47.115441  475717 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:31:47.129304  475717 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 10:31:47.138179  475717 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 10:31:47.138243  475717 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 10:31:47.149031  475717 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 10:31:47.149051  475717 kubeadm.go:157] found existing configuration files:
	
	I1018 10:31:47.149104  475717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 10:31:47.158366  475717 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 10:31:47.158430  475717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 10:31:47.166328  475717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 10:31:47.175030  475717 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 10:31:47.175095  475717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 10:31:47.183345  475717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 10:31:47.191919  475717 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 10:31:47.191982  475717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 10:31:47.200188  475717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 10:31:47.208958  475717 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 10:31:47.209022  475717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 10:31:47.217446  475717 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 10:31:47.265409  475717 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 10:31:47.265759  475717 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 10:31:47.307239  475717 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 10:31:47.307324  475717 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 10:31:47.307366  475717 kubeadm.go:318] OS: Linux
	I1018 10:31:47.307418  475717 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 10:31:47.307473  475717 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 10:31:47.307525  475717 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 10:31:47.307580  475717 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 10:31:47.307634  475717 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 10:31:47.307688  475717 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 10:31:47.307740  475717 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 10:31:47.307793  475717 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 10:31:47.307846  475717 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 10:31:47.413625  475717 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 10:31:47.413746  475717 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 10:31:47.413849  475717 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 10:31:47.457606  475717 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 10:31:47.463402  475717 out.go:252]   - Generating certificates and keys ...
	I1018 10:31:47.463503  475717 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 10:31:47.463580  475717 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 10:31:48.582996  475717 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 10:31:48.969562  475717 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 10:31:49.426304  475717 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 10:31:49.847458  475717 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 10:31:50.758192  475717 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 10:31:50.758542  475717 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-101897 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 10:31:51.765102  475717 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 10:31:51.765337  475717 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-101897 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 10:31:52.112713  475717 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 10:31:52.185474  475717 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 10:31:53.381558  475717 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 10:31:53.381640  475717 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 10:31:53.889550  475717 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 10:31:54.077561  475717 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 10:31:54.319757  475717 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 10:31:54.685159  475717 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 10:31:55.197590  475717 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 10:31:55.200398  475717 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 10:31:55.213572  475717 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 10:31:55.217316  475717 out.go:252]   - Booting up control plane ...
	I1018 10:31:55.217443  475717 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 10:31:55.217525  475717 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 10:31:55.217604  475717 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 10:31:55.262640  475717 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 10:31:55.263007  475717 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 10:31:55.276000  475717 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 10:31:55.287773  475717 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 10:31:55.287853  475717 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 10:31:55.498375  475717 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 10:31:55.498506  475717 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 10:31:58.001588  475717 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.500633833s
	I1018 10:31:58.002387  475717 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 10:31:58.002630  475717 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 10:31:58.002729  475717 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 10:31:58.002812  475717 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
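
kubeadm's control-plane-check polls the three component health endpoints listed above until each answers 200. An illustrative Go poller under the same assumptions (the components serve self-signed certs, hence InsecureSkipVerify; waitHealthy is a hypothetical name, not kubeadm's implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls an HTTPS health endpoint until it returns 200 or
// the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	for _, u := range []string{
		"https://192.168.85.2:8443/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	} {
		fmt.Println(u, waitHealthy(u, 4*time.Minute))
	}
}
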
	I1018 10:32:04.865820  475082 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 10:32:04.865878  475082 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 10:32:04.865969  475082 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 10:32:04.866027  475082 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 10:32:04.866062  475082 kubeadm.go:318] OS: Linux
	I1018 10:32:04.866109  475082 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 10:32:04.866159  475082 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 10:32:04.866209  475082 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 10:32:04.866259  475082 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 10:32:04.866310  475082 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 10:32:04.866360  475082 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 10:32:04.866408  475082 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 10:32:04.866458  475082 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 10:32:04.866507  475082 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 10:32:04.866581  475082 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 10:32:04.866679  475082 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 10:32:04.866772  475082 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 10:32:04.866837  475082 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 10:32:04.869875  475082 out.go:252]   - Generating certificates and keys ...
	I1018 10:32:04.869978  475082 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 10:32:04.870047  475082 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 10:32:04.870117  475082 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 10:32:04.870177  475082 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 10:32:04.870245  475082 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 10:32:04.870298  475082 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 10:32:04.870355  475082 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 10:32:04.870493  475082 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-715182 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 10:32:04.870550  475082 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 10:32:04.870686  475082 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-715182 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 10:32:04.870754  475082 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 10:32:04.870831  475082 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 10:32:04.870879  475082 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 10:32:04.870938  475082 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 10:32:04.870990  475082 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 10:32:04.871050  475082 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 10:32:04.871109  475082 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 10:32:04.871175  475082 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 10:32:04.871233  475082 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 10:32:04.871318  475082 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 10:32:04.871387  475082 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 10:32:04.874425  475082 out.go:252]   - Booting up control plane ...
	I1018 10:32:04.874604  475082 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 10:32:04.874740  475082 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 10:32:04.874862  475082 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 10:32:04.875043  475082 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 10:32:04.875202  475082 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 10:32:04.875365  475082 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 10:32:04.875462  475082 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 10:32:04.875505  475082 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 10:32:04.875649  475082 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 10:32:04.875765  475082 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 10:32:04.875834  475082 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001803384s
	I1018 10:32:04.875936  475082 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 10:32:04.876026  475082 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1018 10:32:04.876124  475082 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 10:32:04.876211  475082 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 10:32:04.876295  475082 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.415755357s
	I1018 10:32:04.876369  475082 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.394818898s
	I1018 10:32:04.876454  475082 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.002194617s
	I1018 10:32:04.876572  475082 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 10:32:04.876712  475082 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 10:32:04.876785  475082 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 10:32:04.877002  475082 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-715182 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 10:32:04.877064  475082 kubeadm.go:318] [bootstrap-token] Using token: 1xbay4.ra29h3fawbyrwawj
	I1018 10:32:04.880133  475082 out.go:252]   - Configuring RBAC rules ...
	I1018 10:32:04.880269  475082 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 10:32:04.880397  475082 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 10:32:04.880555  475082 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 10:32:04.880697  475082 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 10:32:04.880825  475082 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 10:32:04.880920  475082 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 10:32:04.881049  475082 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 10:32:04.881098  475082 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 10:32:04.881149  475082 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 10:32:04.881154  475082 kubeadm.go:318] 
	I1018 10:32:04.881321  475082 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 10:32:04.881327  475082 kubeadm.go:318] 
	I1018 10:32:04.881413  475082 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 10:32:04.881417  475082 kubeadm.go:318] 
	I1018 10:32:04.881445  475082 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 10:32:04.881510  475082 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 10:32:04.881567  475082 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 10:32:04.881572  475082 kubeadm.go:318] 
	I1018 10:32:04.881632  475082 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 10:32:04.881636  475082 kubeadm.go:318] 
	I1018 10:32:04.881689  475082 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 10:32:04.881694  475082 kubeadm.go:318] 
	I1018 10:32:04.881752  475082 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 10:32:04.881835  475082 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 10:32:04.881911  475082 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 10:32:04.881915  475082 kubeadm.go:318] 
	I1018 10:32:04.882009  475082 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 10:32:04.882095  475082 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 10:32:04.882100  475082 kubeadm.go:318] 
	I1018 10:32:04.882194  475082 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token 1xbay4.ra29h3fawbyrwawj \
	I1018 10:32:04.882309  475082 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 \
	I1018 10:32:04.882332  475082 kubeadm.go:318] 	--control-plane 
	I1018 10:32:04.882337  475082 kubeadm.go:318] 
	I1018 10:32:04.882439  475082 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 10:32:04.882444  475082 kubeadm.go:318] 
	I1018 10:32:04.882535  475082 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token 1xbay4.ra29h3fawbyrwawj \
	I1018 10:32:04.882664  475082 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 
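
For reference, the --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 of the CA certificate's Subject Public Key Info, hex-encoded. A short Go program that recomputes it from the node's ca.crt:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Hash the DER-encoded Subject Public Key Info, as kubeadm does.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}
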
	I1018 10:32:04.882673  475082 cni.go:84] Creating CNI manager for ""
	I1018 10:32:04.882680  475082 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:32:04.887196  475082 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 10:32:03.635817  475717 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.632797732s
	I1018 10:32:05.789740  475717 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.786604719s
	I1018 10:32:04.890279  475082 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 10:32:04.895282  475082 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 10:32:04.895304  475082 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 10:32:04.924040  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
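
The step above ships the generated kindnet manifest to the node and applies it with the node-local kubectl against the admin kubeconfig. A minimal Go sketch of that invocation (paths taken from the log lines above):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Apply the CNI manifest with the version-pinned kubectl binary.
	cmd := exec.Command(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kubectl apply: %v\n%s", err, out)
	}
}
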
	I1018 10:32:05.611442  475082 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 10:32:05.611579  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:05.611663  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-715182 minikube.k8s.io/updated_at=2025_10_18T10_32_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=default-k8s-diff-port-715182 minikube.k8s.io/primary=true
	I1018 10:32:06.034410  475082 ops.go:34] apiserver oom_adj: -16
	I1018 10:32:06.034522  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:07.504260  475717 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.501509033s
	I1018 10:32:07.523967  475717 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 10:32:07.549625  475717 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 10:32:07.566172  475717 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 10:32:07.566426  475717 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-101897 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 10:32:07.593126  475717 kubeadm.go:318] [bootstrap-token] Using token: q941ou.y2vfl8rz7u2y7kaa
	I1018 10:32:07.596146  475717 out.go:252]   - Configuring RBAC rules ...
	I1018 10:32:07.596336  475717 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 10:32:07.605004  475717 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 10:32:07.614253  475717 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 10:32:07.622617  475717 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 10:32:07.629829  475717 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 10:32:07.645014  475717 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 10:32:07.919086  475717 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 10:32:08.401299  475717 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 10:32:08.912234  475717 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 10:32:08.913524  475717 kubeadm.go:318] 
	I1018 10:32:08.913607  475717 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 10:32:08.913618  475717 kubeadm.go:318] 
	I1018 10:32:08.913700  475717 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 10:32:08.913711  475717 kubeadm.go:318] 
	I1018 10:32:08.913738  475717 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 10:32:08.913804  475717 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 10:32:08.913863  475717 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 10:32:08.913872  475717 kubeadm.go:318] 
	I1018 10:32:08.913929  475717 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 10:32:08.913938  475717 kubeadm.go:318] 
	I1018 10:32:08.913988  475717 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 10:32:08.913996  475717 kubeadm.go:318] 
	I1018 10:32:08.914052  475717 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 10:32:08.914135  475717 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 10:32:08.914212  475717 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 10:32:08.914220  475717 kubeadm.go:318] 
	I1018 10:32:08.914309  475717 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 10:32:08.914393  475717 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 10:32:08.914401  475717 kubeadm.go:318] 
	I1018 10:32:08.914500  475717 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token q941ou.y2vfl8rz7u2y7kaa \
	I1018 10:32:08.914612  475717 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 \
	I1018 10:32:08.914637  475717 kubeadm.go:318] 	--control-plane 
	I1018 10:32:08.914646  475717 kubeadm.go:318] 
	I1018 10:32:08.914735  475717 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 10:32:08.914743  475717 kubeadm.go:318] 
	I1018 10:32:08.914829  475717 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token q941ou.y2vfl8rz7u2y7kaa \
	I1018 10:32:08.914939  475717 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 
	I1018 10:32:08.917607  475717 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 10:32:08.917857  475717 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 10:32:08.917973  475717 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 10:32:08.918051  475717 cni.go:84] Creating CNI manager for ""
	I1018 10:32:08.918086  475717 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:32:08.923209  475717 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 10:32:06.534653  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:07.034691  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:07.535122  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:08.035460  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:08.535549  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:09.034692  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:09.534702  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:10.035318  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:10.276417  475082 kubeadm.go:1113] duration metric: took 4.664883185s to wait for elevateKubeSystemPrivileges
	I1018 10:32:10.276444  475082 kubeadm.go:402] duration metric: took 27.712204978s to StartCluster
	I1018 10:32:10.276461  475082 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:32:10.276522  475082 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:32:10.277281  475082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:32:10.277478  475082 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:32:10.277613  475082 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 10:32:10.277854  475082 config.go:182] Loaded profile config "default-k8s-diff-port-715182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:32:10.277833  475082 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:32:10.277915  475082 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-715182"
	I1018 10:32:10.277925  475082 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-715182"
	I1018 10:32:10.277938  475082 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-715182"
	I1018 10:32:10.277946  475082 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-715182"
	I1018 10:32:10.277970  475082 host.go:66] Checking if "default-k8s-diff-port-715182" exists ...
	I1018 10:32:10.278243  475082 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:32:10.278402  475082 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:32:10.280827  475082 out.go:179] * Verifying Kubernetes components...
	I1018 10:32:10.284313  475082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:32:10.325879  475082 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-715182"
	I1018 10:32:10.325921  475082 host.go:66] Checking if "default-k8s-diff-port-715182" exists ...
	I1018 10:32:10.326371  475082 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:32:10.333912  475082 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
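
The toEnable map above shows that only storage-provisioner and default-storageclass are switched on for this profile; everything else is false. Other addons can be toggled afterwards with the same binary used throughout this report, for example:

	out/minikube-linux-arm64 -p default-k8s-diff-port-715182 addons list
	out/minikube-linux-arm64 -p default-k8s-diff-port-715182 addons enable metrics-server
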
	I1018 10:32:08.927059  475717 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 10:32:08.931638  475717 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 10:32:08.931659  475717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 10:32:08.958861  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 10:32:09.372119  475717 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 10:32:09.372245  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:09.372310  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-101897 minikube.k8s.io/updated_at=2025_10_18T10_32_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=embed-certs-101897 minikube.k8s.io/primary=true
	I1018 10:32:09.554209  475717 ops.go:34] apiserver oom_adj: -16
	I1018 10:32:09.554338  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:10.055097  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:10.555311  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:11.054355  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
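
The repeated `kubectl get sa default` lines are a fixed-interval poll: kubeadm creates the default ServiceAccount asynchronously, and the timestamps show minikube retrying roughly every 500ms until it appears. The equivalent shell loop, as a minimal sketch:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
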
	I1018 10:32:10.337100  475082 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:32:10.337122  475082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:32:10.337205  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:32:10.353175  475082 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:32:10.353234  475082 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:32:10.353294  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:32:10.387134  475082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:32:10.395094  475082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:32:10.874468  475082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:32:10.891006  475082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:32:10.906078  475082 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 10:32:10.906278  475082 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:32:11.774184  475082 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-715182" to be "Ready" ...
	I1018 10:32:11.774474  475082 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1018 10:32:11.820953  475082 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
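
The sed pipeline at 10:32:10.906078 rewrites the coredns ConfigMap in place. Judging from its expressions, the resulting Corefile gains a `log` directive after `errors` plus a hosts stanza ahead of the resolv.conf forwarder, which is what makes host.minikube.internal resolvable in-cluster:

	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}
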
	I1018 10:32:11.555095  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:12.054471  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:12.554856  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:13.054532  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:13.554456  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:13.661094  475717 kubeadm.go:1113] duration metric: took 4.288891615s to wait for elevateKubeSystemPrivileges
	I1018 10:32:13.661128  475717 kubeadm.go:402] duration metric: took 26.58088751s to StartCluster
	I1018 10:32:13.661146  475717 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:32:13.661250  475717 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:32:13.662633  475717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:32:13.662878  475717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 10:32:13.662882  475717 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:32:13.663166  475717 config.go:182] Loaded profile config "embed-certs-101897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:32:13.663201  475717 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:32:13.663263  475717 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-101897"
	I1018 10:32:13.663277  475717 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-101897"
	I1018 10:32:13.663298  475717 host.go:66] Checking if "embed-certs-101897" exists ...
	I1018 10:32:13.663760  475717 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:32:13.664120  475717 addons.go:69] Setting default-storageclass=true in profile "embed-certs-101897"
	I1018 10:32:13.664145  475717 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-101897"
	I1018 10:32:13.664444  475717 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:32:13.666672  475717 out.go:179] * Verifying Kubernetes components...
	I1018 10:32:13.673453  475717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:32:13.696580  475717 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:32:13.699403  475717 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:32:13.699426  475717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:32:13.699499  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:32:13.727031  475717 addons.go:238] Setting addon default-storageclass=true in "embed-certs-101897"
	I1018 10:32:13.727076  475717 host.go:66] Checking if "embed-certs-101897" exists ...
	I1018 10:32:13.727513  475717 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:32:13.745295  475717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:32:13.762815  475717 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:32:13.762838  475717 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:32:13.762917  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:32:13.792320  475717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:32:14.025504  475717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:32:14.112087  475717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 10:32:14.112233  475717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:32:14.147653  475717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:32:14.645378  475717 node_ready.go:35] waiting up to 6m0s for node "embed-certs-101897" to be "Ready" ...
	I1018 10:32:14.645642  475717 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1018 10:32:14.835445  475717 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1018 10:32:14.838305  475717 addons.go:514] duration metric: took 1.175073755s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1018 10:32:15.152311  475717 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-101897" context rescaled to 1 replicas
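
The kapi.go lines above scale CoreDNS from kubeadm's default of two replicas down to one, which is plenty for a single-node cluster. Done by hand, the equivalent would be:

	kubectl -n kube-system scale deployment coredns --replicas=1
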
	I1018 10:32:11.823687  475082 addons.go:514] duration metric: took 1.545847877s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 10:32:12.277598  475082 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-715182" context rescaled to 1 replicas
	W1018 10:32:13.779972  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:16.278128  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:16.649003  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:18.649496  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:18.776881  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:20.777766  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:21.148542  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:23.648242  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:25.648716  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:22.778682  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:25.277820  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:28.148212  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:30.148663  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:27.277882  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:29.777875  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:32.149130  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:34.649291  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:32.277109  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:34.778732  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:37.150232  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:39.648902  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:37.277135  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:39.777346  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:41.648943  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:44.148321  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:41.778175  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:43.778656  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:46.277951  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:46.148807  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:48.648633  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:48.778112  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:50.778274  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	I1018 10:32:51.277421  475082 node_ready.go:49] node "default-k8s-diff-port-715182" is "Ready"
	I1018 10:32:51.277455  475082 node_ready.go:38] duration metric: took 39.503237928s for node "default-k8s-diff-port-715182" to be "Ready" ...
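
The 39.5s wait above is minikube polling the node object until its Ready condition flips to True, with the interim "will retry" warnings logged along the way. The same wait as a one-liner, for reference:

	kubectl wait --for=condition=Ready node/default-k8s-diff-port-715182 --timeout=6m
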
	I1018 10:32:51.277469  475082 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:32:51.277524  475082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:32:51.291846  475082 api_server.go:72] duration metric: took 41.014338044s to wait for apiserver process to appear ...
	I1018 10:32:51.291870  475082 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:32:51.291889  475082 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1018 10:32:51.302386  475082 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1018 10:32:51.303749  475082 api_server.go:141] control plane version: v1.34.1
	I1018 10:32:51.303777  475082 api_server.go:131] duration metric: took 11.899909ms to wait for apiserver health ...
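
The healthz probe above hits the apiserver endpoint directly. It can be reproduced with curl; on a default cluster /healthz is readable anonymously via the system:public-info-viewer binding, so only certificate verification needs to be skipped:

	curl -sk https://192.168.76.2:8444/healthz   # prints: ok
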
	I1018 10:32:51.303787  475082 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:32:51.307620  475082 system_pods.go:59] 8 kube-system pods found
	I1018 10:32:51.307654  475082 system_pods.go:61] "coredns-66bc5c9577-c2sb5" [2bf09318-3195-4ef2-a555-c4c945efa126] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:32:51.307662  475082 system_pods.go:61] "etcd-default-k8s-diff-port-715182" [13b11953-c29c-4d29-ae1b-ebce1e53f950] Running
	I1018 10:32:51.307668  475082 system_pods.go:61] "kindnet-zd5md" [e9eba0a5-422b-4250-b9b3-087619a17e95] Running
	I1018 10:32:51.307677  475082 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-715182" [823d4f57-e97b-4366-b670-121e096a2102] Running
	I1018 10:32:51.307682  475082 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-715182" [ad9c1831-0e8f-410e-a084-a4f84aeda8d8] Running
	I1018 10:32:51.307695  475082 system_pods.go:61] "kube-proxy-5whrp" [0b69ab6c-f661-4b7a-92ce-157440319945] Running
	I1018 10:32:51.307700  475082 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-715182" [7aa74f8f-2fa6-4ef0-9ee1-c81d0366174e] Running
	I1018 10:32:51.307706  475082 system_pods.go:61] "storage-provisioner" [4e374f22-b5d4-4fc3-9c49-c35310ff348e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:32:51.307719  475082 system_pods.go:74] duration metric: took 3.92612ms to wait for pod list to return data ...
	I1018 10:32:51.307728  475082 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:32:51.311778  475082 default_sa.go:45] found service account: "default"
	I1018 10:32:51.311803  475082 default_sa.go:55] duration metric: took 4.064369ms for default service account to be created ...
	I1018 10:32:51.311812  475082 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 10:32:51.315953  475082 system_pods.go:86] 8 kube-system pods found
	I1018 10:32:51.316027  475082 system_pods.go:89] "coredns-66bc5c9577-c2sb5" [2bf09318-3195-4ef2-a555-c4c945efa126] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:32:51.316044  475082 system_pods.go:89] "etcd-default-k8s-diff-port-715182" [13b11953-c29c-4d29-ae1b-ebce1e53f950] Running
	I1018 10:32:51.316053  475082 system_pods.go:89] "kindnet-zd5md" [e9eba0a5-422b-4250-b9b3-087619a17e95] Running
	I1018 10:32:51.316058  475082 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-715182" [823d4f57-e97b-4366-b670-121e096a2102] Running
	I1018 10:32:51.316063  475082 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-715182" [ad9c1831-0e8f-410e-a084-a4f84aeda8d8] Running
	I1018 10:32:51.316068  475082 system_pods.go:89] "kube-proxy-5whrp" [0b69ab6c-f661-4b7a-92ce-157440319945] Running
	I1018 10:32:51.316072  475082 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-715182" [7aa74f8f-2fa6-4ef0-9ee1-c81d0366174e] Running
	I1018 10:32:51.316095  475082 system_pods.go:89] "storage-provisioner" [4e374f22-b5d4-4fc3-9c49-c35310ff348e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:32:51.316136  475082 retry.go:31] will retry after 250.86488ms: missing components: kube-dns
	I1018 10:32:51.571188  475082 system_pods.go:86] 8 kube-system pods found
	I1018 10:32:51.571224  475082 system_pods.go:89] "coredns-66bc5c9577-c2sb5" [2bf09318-3195-4ef2-a555-c4c945efa126] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:32:51.571231  475082 system_pods.go:89] "etcd-default-k8s-diff-port-715182" [13b11953-c29c-4d29-ae1b-ebce1e53f950] Running
	I1018 10:32:51.571239  475082 system_pods.go:89] "kindnet-zd5md" [e9eba0a5-422b-4250-b9b3-087619a17e95] Running
	I1018 10:32:51.571246  475082 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-715182" [823d4f57-e97b-4366-b670-121e096a2102] Running
	I1018 10:32:51.571250  475082 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-715182" [ad9c1831-0e8f-410e-a084-a4f84aeda8d8] Running
	I1018 10:32:51.571255  475082 system_pods.go:89] "kube-proxy-5whrp" [0b69ab6c-f661-4b7a-92ce-157440319945] Running
	I1018 10:32:51.571259  475082 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-715182" [7aa74f8f-2fa6-4ef0-9ee1-c81d0366174e] Running
	I1018 10:32:51.571265  475082 system_pods.go:89] "storage-provisioner" [4e374f22-b5d4-4fc3-9c49-c35310ff348e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:32:51.571286  475082 retry.go:31] will retry after 243.388244ms: missing components: kube-dns
	I1018 10:32:51.820258  475082 system_pods.go:86] 8 kube-system pods found
	I1018 10:32:51.820289  475082 system_pods.go:89] "coredns-66bc5c9577-c2sb5" [2bf09318-3195-4ef2-a555-c4c945efa126] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:32:51.820296  475082 system_pods.go:89] "etcd-default-k8s-diff-port-715182" [13b11953-c29c-4d29-ae1b-ebce1e53f950] Running
	I1018 10:32:51.820325  475082 system_pods.go:89] "kindnet-zd5md" [e9eba0a5-422b-4250-b9b3-087619a17e95] Running
	I1018 10:32:51.820338  475082 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-715182" [823d4f57-e97b-4366-b670-121e096a2102] Running
	I1018 10:32:51.820344  475082 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-715182" [ad9c1831-0e8f-410e-a084-a4f84aeda8d8] Running
	I1018 10:32:51.820358  475082 system_pods.go:89] "kube-proxy-5whrp" [0b69ab6c-f661-4b7a-92ce-157440319945] Running
	I1018 10:32:51.820367  475082 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-715182" [7aa74f8f-2fa6-4ef0-9ee1-c81d0366174e] Running
	I1018 10:32:51.820381  475082 system_pods.go:89] "storage-provisioner" [4e374f22-b5d4-4fc3-9c49-c35310ff348e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:32:51.820412  475082 retry.go:31] will retry after 473.612147ms: missing components: kube-dns
	I1018 10:32:52.298784  475082 system_pods.go:86] 8 kube-system pods found
	I1018 10:32:52.298816  475082 system_pods.go:89] "coredns-66bc5c9577-c2sb5" [2bf09318-3195-4ef2-a555-c4c945efa126] Running
	I1018 10:32:52.298823  475082 system_pods.go:89] "etcd-default-k8s-diff-port-715182" [13b11953-c29c-4d29-ae1b-ebce1e53f950] Running
	I1018 10:32:52.298830  475082 system_pods.go:89] "kindnet-zd5md" [e9eba0a5-422b-4250-b9b3-087619a17e95] Running
	I1018 10:32:52.298856  475082 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-715182" [823d4f57-e97b-4366-b670-121e096a2102] Running
	I1018 10:32:52.298872  475082 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-715182" [ad9c1831-0e8f-410e-a084-a4f84aeda8d8] Running
	I1018 10:32:52.298877  475082 system_pods.go:89] "kube-proxy-5whrp" [0b69ab6c-f661-4b7a-92ce-157440319945] Running
	I1018 10:32:52.298882  475082 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-715182" [7aa74f8f-2fa6-4ef0-9ee1-c81d0366174e] Running
	I1018 10:32:52.298886  475082 system_pods.go:89] "storage-provisioner" [4e374f22-b5d4-4fc3-9c49-c35310ff348e] Running
	I1018 10:32:52.298894  475082 system_pods.go:126] duration metric: took 987.07601ms to wait for k8s-apps to be running ...
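
Each retry above reports `missing components: kube-dns` until the CoreDNS pod leaves Pending; CoreDNS pods carry the k8s-app=kube-dns label that the check keys on. To watch the same transition by hand:

	kubectl -n kube-system get pods -l k8s-app=kube-dns -w
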
	I1018 10:32:52.298908  475082 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 10:32:52.298971  475082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:32:52.313737  475082 system_svc.go:56] duration metric: took 14.819996ms WaitForService to wait for kubelet
	I1018 10:32:52.313768  475082 kubeadm.go:586] duration metric: took 42.036265766s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:32:52.313786  475082 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:32:52.316609  475082 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:32:52.316641  475082 node_conditions.go:123] node cpu capacity is 2
	I1018 10:32:52.316655  475082 node_conditions.go:105] duration metric: took 2.863159ms to run NodePressure ...
	I1018 10:32:52.316668  475082 start.go:241] waiting for startup goroutines ...
	I1018 10:32:52.316676  475082 start.go:246] waiting for cluster config update ...
	I1018 10:32:52.316686  475082 start.go:255] writing updated cluster config ...
	I1018 10:32:52.316982  475082 ssh_runner.go:195] Run: rm -f paused
	I1018 10:32:52.320682  475082 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:32:52.324844  475082 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c2sb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:52.329812  475082 pod_ready.go:94] pod "coredns-66bc5c9577-c2sb5" is "Ready"
	I1018 10:32:52.329843  475082 pod_ready.go:86] duration metric: took 4.970314ms for pod "coredns-66bc5c9577-c2sb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:52.333026  475082 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:52.337800  475082 pod_ready.go:94] pod "etcd-default-k8s-diff-port-715182" is "Ready"
	I1018 10:32:52.337827  475082 pod_ready.go:86] duration metric: took 4.773505ms for pod "etcd-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:52.340220  475082 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:52.345148  475082 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-715182" is "Ready"
	I1018 10:32:52.345179  475082 pod_ready.go:86] duration metric: took 4.935073ms for pod "kube-apiserver-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:52.347627  475082 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:52.725323  475082 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-715182" is "Ready"
	I1018 10:32:52.725353  475082 pod_ready.go:86] duration metric: took 377.699943ms for pod "kube-controller-manager-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:52.925643  475082 pod_ready.go:83] waiting for pod "kube-proxy-5whrp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:53.324803  475082 pod_ready.go:94] pod "kube-proxy-5whrp" is "Ready"
	I1018 10:32:53.324879  475082 pod_ready.go:86] duration metric: took 399.209488ms for pod "kube-proxy-5whrp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:53.525288  475082 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:53.925633  475082 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-715182" is "Ready"
	I1018 10:32:53.925659  475082 pod_ready.go:86] duration metric: took 400.345583ms for pod "kube-scheduler-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:53.925673  475082 pod_ready.go:40] duration metric: took 1.604959783s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:32:53.991639  475082 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:32:53.995011  475082 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-715182" cluster and "default" namespace by default
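
The `minor skew: 1` note above is informational rather than an error: kubectl supports clusters within one minor version in either direction, so a 1.33 client against a 1.34 apiserver is within policy. Both versions can be confirmed with:

	kubectl version
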
	W1018 10:32:51.148947  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:53.648942  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	I1018 10:32:55.149307  475717 node_ready.go:49] node "embed-certs-101897" is "Ready"
	I1018 10:32:55.149340  475717 node_ready.go:38] duration metric: took 40.503928468s for node "embed-certs-101897" to be "Ready" ...
	I1018 10:32:55.149353  475717 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:32:55.149414  475717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:32:55.162144  475717 api_server.go:72] duration metric: took 41.499232613s to wait for apiserver process to appear ...
	I1018 10:32:55.162168  475717 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:32:55.162187  475717 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 10:32:55.170613  475717 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 10:32:55.171641  475717 api_server.go:141] control plane version: v1.34.1
	I1018 10:32:55.171664  475717 api_server.go:131] duration metric: took 9.489597ms to wait for apiserver health ...
	I1018 10:32:55.171673  475717 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:32:55.175040  475717 system_pods.go:59] 8 kube-system pods found
	I1018 10:32:55.175080  475717 system_pods.go:61] "coredns-66bc5c9577-hxrmf" [0afa9baa-7349-44ad-ab0d-5a8cf04751c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:32:55.175088  475717 system_pods.go:61] "etcd-embed-certs-101897" [bdfd5bce-7d86-4e96-ada2-43cd7ea36ba9] Running
	I1018 10:32:55.175094  475717 system_pods.go:61] "kindnet-qt6bn" [e8f627be-9c95-40c3-9c90-959737c71fc9] Running
	I1018 10:32:55.175099  475717 system_pods.go:61] "kube-apiserver-embed-certs-101897" [70a4bcb4-f0af-4bcf-9101-062ba75dbba9] Running
	I1018 10:32:55.175104  475717 system_pods.go:61] "kube-controller-manager-embed-certs-101897" [c6ed118d-dbcd-457c-b23d-dac329134f87] Running
	I1018 10:32:55.175109  475717 system_pods.go:61] "kube-proxy-bp45x" [1fb88f61-5197-4234-b157-2c84ed2dd0f3] Running
	I1018 10:32:55.175115  475717 system_pods.go:61] "kube-scheduler-embed-certs-101897" [59f4e8f7-bba7-4029-918c-1f827651aecb] Running
	I1018 10:32:55.175121  475717 system_pods.go:61] "storage-provisioner" [0d449f69-e21a-40a5-8c77-65c4665a58f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:32:55.175133  475717 system_pods.go:74] duration metric: took 3.453056ms to wait for pod list to return data ...
	I1018 10:32:55.175144  475717 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:32:55.177684  475717 default_sa.go:45] found service account: "default"
	I1018 10:32:55.177709  475717 default_sa.go:55] duration metric: took 2.55781ms for default service account to be created ...
	I1018 10:32:55.177718  475717 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 10:32:55.180639  475717 system_pods.go:86] 8 kube-system pods found
	I1018 10:32:55.180676  475717 system_pods.go:89] "coredns-66bc5c9577-hxrmf" [0afa9baa-7349-44ad-ab0d-5a8cf04751c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:32:55.180685  475717 system_pods.go:89] "etcd-embed-certs-101897" [bdfd5bce-7d86-4e96-ada2-43cd7ea36ba9] Running
	I1018 10:32:55.180719  475717 system_pods.go:89] "kindnet-qt6bn" [e8f627be-9c95-40c3-9c90-959737c71fc9] Running
	I1018 10:32:55.180733  475717 system_pods.go:89] "kube-apiserver-embed-certs-101897" [70a4bcb4-f0af-4bcf-9101-062ba75dbba9] Running
	I1018 10:32:55.180739  475717 system_pods.go:89] "kube-controller-manager-embed-certs-101897" [c6ed118d-dbcd-457c-b23d-dac329134f87] Running
	I1018 10:32:55.180743  475717 system_pods.go:89] "kube-proxy-bp45x" [1fb88f61-5197-4234-b157-2c84ed2dd0f3] Running
	I1018 10:32:55.180749  475717 system_pods.go:89] "kube-scheduler-embed-certs-101897" [59f4e8f7-bba7-4029-918c-1f827651aecb] Running
	I1018 10:32:55.180758  475717 system_pods.go:89] "storage-provisioner" [0d449f69-e21a-40a5-8c77-65c4665a58f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:32:55.180791  475717 retry.go:31] will retry after 290.015189ms: missing components: kube-dns
	I1018 10:32:55.475880  475717 system_pods.go:86] 8 kube-system pods found
	I1018 10:32:55.475972  475717 system_pods.go:89] "coredns-66bc5c9577-hxrmf" [0afa9baa-7349-44ad-ab0d-5a8cf04751c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:32:55.475993  475717 system_pods.go:89] "etcd-embed-certs-101897" [bdfd5bce-7d86-4e96-ada2-43cd7ea36ba9] Running
	I1018 10:32:55.476013  475717 system_pods.go:89] "kindnet-qt6bn" [e8f627be-9c95-40c3-9c90-959737c71fc9] Running
	I1018 10:32:55.476035  475717 system_pods.go:89] "kube-apiserver-embed-certs-101897" [70a4bcb4-f0af-4bcf-9101-062ba75dbba9] Running
	I1018 10:32:55.476061  475717 system_pods.go:89] "kube-controller-manager-embed-certs-101897" [c6ed118d-dbcd-457c-b23d-dac329134f87] Running
	I1018 10:32:55.476078  475717 system_pods.go:89] "kube-proxy-bp45x" [1fb88f61-5197-4234-b157-2c84ed2dd0f3] Running
	I1018 10:32:55.476106  475717 system_pods.go:89] "kube-scheduler-embed-certs-101897" [59f4e8f7-bba7-4029-918c-1f827651aecb] Running
	I1018 10:32:55.476137  475717 system_pods.go:89] "storage-provisioner" [0d449f69-e21a-40a5-8c77-65c4665a58f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:32:55.476166  475717 retry.go:31] will retry after 381.532323ms: missing components: kube-dns
	I1018 10:32:55.862882  475717 system_pods.go:86] 8 kube-system pods found
	I1018 10:32:55.862913  475717 system_pods.go:89] "coredns-66bc5c9577-hxrmf" [0afa9baa-7349-44ad-ab0d-5a8cf04751c4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:32:55.862923  475717 system_pods.go:89] "etcd-embed-certs-101897" [bdfd5bce-7d86-4e96-ada2-43cd7ea36ba9] Running
	I1018 10:32:55.862929  475717 system_pods.go:89] "kindnet-qt6bn" [e8f627be-9c95-40c3-9c90-959737c71fc9] Running
	I1018 10:32:55.862934  475717 system_pods.go:89] "kube-apiserver-embed-certs-101897" [70a4bcb4-f0af-4bcf-9101-062ba75dbba9] Running
	I1018 10:32:55.862938  475717 system_pods.go:89] "kube-controller-manager-embed-certs-101897" [c6ed118d-dbcd-457c-b23d-dac329134f87] Running
	I1018 10:32:55.862942  475717 system_pods.go:89] "kube-proxy-bp45x" [1fb88f61-5197-4234-b157-2c84ed2dd0f3] Running
	I1018 10:32:55.862946  475717 system_pods.go:89] "kube-scheduler-embed-certs-101897" [59f4e8f7-bba7-4029-918c-1f827651aecb] Running
	I1018 10:32:55.862949  475717 system_pods.go:89] "storage-provisioner" [0d449f69-e21a-40a5-8c77-65c4665a58f5] Running
	I1018 10:32:55.862957  475717 system_pods.go:126] duration metric: took 685.232981ms to wait for k8s-apps to be running ...
	I1018 10:32:55.862965  475717 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 10:32:55.863049  475717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:32:55.875973  475717 system_svc.go:56] duration metric: took 12.99867ms WaitForService to wait for kubelet
	I1018 10:32:55.876043  475717 kubeadm.go:586] duration metric: took 42.213135249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:32:55.876070  475717 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:32:55.879162  475717 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:32:55.879198  475717 node_conditions.go:123] node cpu capacity is 2
	I1018 10:32:55.879213  475717 node_conditions.go:105] duration metric: took 3.13636ms to run NodePressure ...
	I1018 10:32:55.879225  475717 start.go:241] waiting for startup goroutines ...
	I1018 10:32:55.879234  475717 start.go:246] waiting for cluster config update ...
	I1018 10:32:55.879245  475717 start.go:255] writing updated cluster config ...
	I1018 10:32:55.879522  475717 ssh_runner.go:195] Run: rm -f paused
	I1018 10:32:55.883001  475717 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:32:55.886681  475717 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hxrmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:56.892072  475717 pod_ready.go:94] pod "coredns-66bc5c9577-hxrmf" is "Ready"
	I1018 10:32:56.892105  475717 pod_ready.go:86] duration metric: took 1.005395763s for pod "coredns-66bc5c9577-hxrmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:56.895044  475717 pod_ready.go:83] waiting for pod "etcd-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:56.899125  475717 pod_ready.go:94] pod "etcd-embed-certs-101897" is "Ready"
	I1018 10:32:56.899149  475717 pod_ready.go:86] duration metric: took 4.077202ms for pod "etcd-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:56.901441  475717 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:56.906117  475717 pod_ready.go:94] pod "kube-apiserver-embed-certs-101897" is "Ready"
	I1018 10:32:56.906143  475717 pod_ready.go:86] duration metric: took 4.682648ms for pod "kube-apiserver-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:56.908874  475717 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:57.091345  475717 pod_ready.go:94] pod "kube-controller-manager-embed-certs-101897" is "Ready"
	I1018 10:32:57.091369  475717 pod_ready.go:86] duration metric: took 182.470232ms for pod "kube-controller-manager-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:57.290915  475717 pod_ready.go:83] waiting for pod "kube-proxy-bp45x" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:57.691020  475717 pod_ready.go:94] pod "kube-proxy-bp45x" is "Ready"
	I1018 10:32:57.691051  475717 pod_ready.go:86] duration metric: took 400.11253ms for pod "kube-proxy-bp45x" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:57.890767  475717 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:58.290375  475717 pod_ready.go:94] pod "kube-scheduler-embed-certs-101897" is "Ready"
	I1018 10:32:58.290405  475717 pod_ready.go:86] duration metric: took 399.604222ms for pod "kube-scheduler-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:58.290417  475717 pod_ready.go:40] duration metric: took 2.40738455s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:32:58.347215  475717 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:32:58.350765  475717 out.go:179] * Done! kubectl is now configured to use "embed-certs-101897" cluster and "default" namespace by default
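
At this point both profiles have written contexts into the shared kubeconfig, and kubectl defaults to whichever cluster finished last. Switching between them:

	kubectl config get-contexts
	kubectl config use-context default-k8s-diff-port-715182
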
	
	
	==> CRI-O <==
	Oct 18 10:32:51 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:51.381885102Z" level=info msg="Created container 770771df3b63326a9c3fb033efb70ecff4ef9c038161c01e01014760d311aa72: kube-system/coredns-66bc5c9577-c2sb5/coredns" id=c3d89982-c6f7-429e-94be-9c14698c31cc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:32:51 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:51.382916349Z" level=info msg="Starting container: 770771df3b63326a9c3fb033efb70ecff4ef9c038161c01e01014760d311aa72" id=13c83c62-301b-4845-b0e4-68a9abfd233f name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:32:51 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:51.38644317Z" level=info msg="Started container" PID=1745 containerID=770771df3b63326a9c3fb033efb70ecff4ef9c038161c01e01014760d311aa72 description=kube-system/coredns-66bc5c9577-c2sb5/coredns id=13c83c62-301b-4845-b0e4-68a9abfd233f name=/runtime.v1.RuntimeService/StartContainer sandboxID=98015c0ec0ed8bdb1159b2e4a30f14f865e27e544db16621340f9bf11700f21b
	Oct 18 10:32:54 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:54.534541042Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7f4bce72-3857-4623-9c27-3e56efab46eb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:32:54 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:54.534618056Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:32:54 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:54.539901243Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ef9d038c42594fad057f40e071203e768b5a8abf65a132ca50fe146f16f36309 UID:6a6ba823-a995-4243-bfa2-29e841489887 NetNS:/var/run/netns/6a544447-923a-4326-a385-2642f7bcf78e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40002ae4f0}] Aliases:map[]}"
	Oct 18 10:32:54 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:54.540073585Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 10:32:54 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:54.560543632Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ef9d038c42594fad057f40e071203e768b5a8abf65a132ca50fe146f16f36309 UID:6a6ba823-a995-4243-bfa2-29e841489887 NetNS:/var/run/netns/6a544447-923a-4326-a385-2642f7bcf78e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40002ae4f0}] Aliases:map[]}"
	Oct 18 10:32:54 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:54.560899624Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 10:32:54 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:54.56958689Z" level=info msg="Ran pod sandbox ef9d038c42594fad057f40e071203e768b5a8abf65a132ca50fe146f16f36309 with infra container: default/busybox/POD" id=7f4bce72-3857-4623-9c27-3e56efab46eb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:32:54 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:54.570978092Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=52bd9e20-1adb-466e-ae33-cb2aec699bb2 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:32:54 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:54.571110516Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=52bd9e20-1adb-466e-ae33-cb2aec699bb2 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:32:54 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:54.571155538Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=52bd9e20-1adb-466e-ae33-cb2aec699bb2 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:32:54 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:54.572439497Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=036133a8-300e-4fb0-8ddd-4dcce49b42a5 name=/runtime.v1.ImageService/PullImage
	Oct 18 10:32:54 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:54.575406042Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 10:32:56 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:56.774083838Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=036133a8-300e-4fb0-8ddd-4dcce49b42a5 name=/runtime.v1.ImageService/PullImage
	Oct 18 10:32:56 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:56.774743176Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=adca6649-be28-4c01-9adc-da5bf21f18e2 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:32:56 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:56.7787597Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cbb3052c-8575-4319-a31a-97ed70bff84a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:32:56 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:56.784796864Z" level=info msg="Creating container: default/busybox/busybox" id=6785d79a-8d49-4106-8fa2-f29a73cfca82 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:32:56 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:56.785742096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:32:56 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:56.793548747Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:32:56 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:56.794096149Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:32:56 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:56.808459912Z" level=info msg="Created container c32e0e3839c7f19304b636614d91b1c283a66198715c3e3728a49c21ca0e7604: default/busybox/busybox" id=6785d79a-8d49-4106-8fa2-f29a73cfca82 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:32:56 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:56.809478195Z" level=info msg="Starting container: c32e0e3839c7f19304b636614d91b1c283a66198715c3e3728a49c21ca0e7604" id=837c65f6-de9f-499a-ae35-df0fcd912c01 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:32:56 default-k8s-diff-port-715182 crio[834]: time="2025-10-18T10:32:56.8115112Z" level=info msg="Started container" PID=1799 containerID=c32e0e3839c7f19304b636614d91b1c283a66198715c3e3728a49c21ca0e7604 description=default/busybox/busybox id=837c65f6-de9f-499a-ae35-df0fcd912c01 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ef9d038c42594fad057f40e071203e768b5a8abf65a132ca50fe146f16f36309
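
The pull above resolves the 1.28.4-glibc tag to the sha256 digest that reappears in the container table below. The stored image can be inspected from inside the node (for example after `minikube ssh -p default-k8s-diff-port-715182`):

	sudo crictl images
	sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28.4-glibc
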
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	c32e0e3839c7f       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   6 seconds ago        Running             busybox                   0                   ef9d038c42594       busybox                                                default
	770771df3b633       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   98015c0ec0ed8       coredns-66bc5c9577-c2sb5                               kube-system
	b99f882b72315       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   e6e8212227f7f       storage-provisioner                                    kube-system
	24ed9f05a192b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   67e2120a5e071       kube-proxy-5whrp                                       kube-system
	fd9557441e2ee       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   08cdaaeeef011       kindnet-zd5md                                          kube-system
	2ea413c004c68       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   ddebd790893d8       kube-apiserver-default-k8s-diff-port-715182            kube-system
	2075a30294904       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   78a325b7755f5       kube-scheduler-default-k8s-diff-port-715182            kube-system
	fe9ccc45f498f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   5a0c5b9897605       kube-controller-manager-default-k8s-diff-port-715182   kube-system
	8acdfd9a2c075       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   f190a12eb8d38       etcd-default-k8s-diff-port-715182                      kube-system
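
The table above is crictl-style output gathered from the node's CRI-O socket; it can be reproduced live from inside the node with:

	sudo crictl ps -a
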
	
	
	==> coredns [770771df3b63326a9c3fb033efb70ecff4ef9c038161c01e01014760d311aa72] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48095 - 56777 "HINFO IN 9223317076654135433.5970921263171788410. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.040383787s
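
The coredns log confirms the server is up on :53. Since the hosts stanza for host.minikube.internal was injected earlier, a resolution check from the busybox pod created above should answer with 192.168.76.1:

	kubectl exec busybox -- nslookup host.minikube.internal
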
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-715182
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-715182
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=default-k8s-diff-port-715182
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_32_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:32:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-715182
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:32:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:32:55 +0000   Sat, 18 Oct 2025 10:31:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:32:55 +0000   Sat, 18 Oct 2025 10:31:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:32:55 +0000   Sat, 18 Oct 2025 10:31:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 10:32:55 +0000   Sat, 18 Oct 2025 10:32:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-715182
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                c53d6dae-7a14-4045-ac49-41d96155b5e4
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-c2sb5                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     53s
	  kube-system                 etcd-default-k8s-diff-port-715182                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-zd5md                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-default-k8s-diff-port-715182             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-715182    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-5whrp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-default-k8s-diff-port-715182             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 52s                kube-proxy       
	  Warning  CgroupV1                 69s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasSufficientPID
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                node-controller  Node default-k8s-diff-port-715182 event: Registered Node default-k8s-diff-port-715182 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-715182 status is now: NodeReady
	
	
	==> dmesg <==
	[ +35.463301] overlayfs: idmapped layers are currently not supported
	[Oct18 10:11] overlayfs: idmapped layers are currently not supported
	[Oct18 10:13] overlayfs: idmapped layers are currently not supported
	[Oct18 10:14] overlayfs: idmapped layers are currently not supported
	[Oct18 10:15] overlayfs: idmapped layers are currently not supported
	[Oct18 10:16] overlayfs: idmapped layers are currently not supported
	[  +1.944912] overlayfs: idmapped layers are currently not supported
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	[ +55.677340] overlayfs: idmapped layers are currently not supported
	[  +3.870584] overlayfs: idmapped layers are currently not supported
	[Oct18 10:24] overlayfs: idmapped layers are currently not supported
	[ +31.226998] overlayfs: idmapped layers are currently not supported
	[Oct18 10:27] overlayfs: idmapped layers are currently not supported
	[ +41.576921] overlayfs: idmapped layers are currently not supported
	[  +5.117406] overlayfs: idmapped layers are currently not supported
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	[Oct18 10:31] overlayfs: idmapped layers are currently not supported
	[  +3.453230] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8acdfd9a2c07572357494a1579e14ab34b7cad59f826bf2683c69c0095d02ec6] <==
	{"level":"warn","ts":"2025-10-18T10:31:58.043868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.067947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.087161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.101230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.118681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.180333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.208172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.234387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.253596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.287175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.347439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.360779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.386043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.427847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.473548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.525370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.586231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.619809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.649977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.682592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.726521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.776143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.809509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.852607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:31:58.989377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53660","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:33:04 up  2:15,  0 user,  load average: 3.75, 3.94, 3.09
	Linux default-k8s-diff-port-715182 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fd9557441e2ee59a8ece2dd3733e219d51c4430743ce878c46cf60eb4780e1c6] <==
	I1018 10:32:10.676687       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:32:10.710930       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 10:32:10.711088       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:32:10.711101       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:32:10.711115       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:32:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:32:10.914984       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:32:10.915001       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:32:10.915009       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:32:10.915642       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 10:32:40.915065       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 10:32:40.916200       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 10:32:40.916282       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 10:32:40.916307       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 10:32:42.116030       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 10:32:42.116073       1 metrics.go:72] Registering metrics
	I1018 10:32:42.116162       1 controller.go:711] "Syncing nftables rules"
	I1018 10:32:50.922265       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:32:50.922325       1 main.go:301] handling current node
	I1018 10:33:00.917295       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:33:00.917389       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2ea413c004c68fdce0cfa14a24d5eca23fae33673caeb94a3284abb0f5e2f991] <==
	I1018 10:32:01.037298       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 10:32:01.048285       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 10:32:01.126545       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:32:01.126667       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 10:32:01.186862       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:32:01.187012       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 10:32:01.262763       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 10:32:01.645403       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 10:32:01.674176       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 10:32:01.695430       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 10:32:02.899870       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 10:32:02.962767       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 10:32:03.095745       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 10:32:03.108001       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 10:32:03.109169       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 10:32:03.115311       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 10:32:03.962279       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 10:32:04.319903       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 10:32:04.343135       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 10:32:04.356481       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 10:32:09.115133       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 10:32:09.947500       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 10:32:10.019552       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:32:10.030554       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1018 10:33:02.364992       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:34472: use of closed network connection
	
	
	==> kube-controller-manager [fe9ccc45f498f7160b29f4e0910ac4177eaede81787a59cf655aa90592ecc9ce] <==
	I1018 10:32:08.999738       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 10:32:08.999745       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 10:32:09.005745       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 10:32:09.005839       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:32:09.006902       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 10:32:09.006959       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 10:32:09.006981       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 10:32:09.006995       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 10:32:09.007000       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 10:32:09.009443       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 10:32:09.017847       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 10:32:09.018185       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 10:32:09.020158       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 10:32:09.034496       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-715182" podCIDRs=["10.244.0.0/24"]
	I1018 10:32:09.042212       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 10:32:09.042303       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 10:32:09.042391       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 10:32:09.042434       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 10:32:09.042168       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 10:32:09.042544       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-715182"
	I1018 10:32:09.042688       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 10:32:09.042957       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 10:32:09.060069       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 10:32:09.060567       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:32:54.049403       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [24ed9f05a192bab6c4d2a6c5f9461f527b1575048f5ad0cacce348aec3e93da0] <==
	I1018 10:32:10.884657       1 server_linux.go:53] "Using iptables proxy"
	I1018 10:32:11.002462       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 10:32:11.114917       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 10:32:11.114952       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 10:32:11.115031       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 10:32:11.170398       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:32:11.170456       1 server_linux.go:132] "Using iptables Proxier"
	I1018 10:32:11.189150       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 10:32:11.190713       1 server.go:527] "Version info" version="v1.34.1"
	I1018 10:32:11.190735       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:32:11.192468       1 config.go:200] "Starting service config controller"
	I1018 10:32:11.192480       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 10:32:11.192504       1 config.go:106] "Starting endpoint slice config controller"
	I1018 10:32:11.192509       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 10:32:11.192524       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 10:32:11.192528       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 10:32:11.195293       1 config.go:309] "Starting node config controller"
	I1018 10:32:11.195311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 10:32:11.195320       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 10:32:11.293539       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 10:32:11.293587       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 10:32:11.293636       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2075a30294904bfb2fdc88e3848cd95e2b0d7fd7f1446fdcdcf5a98dbf15b1b1] <==
	I1018 10:32:01.802626       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 10:32:01.817825       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 10:32:01.817873       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:32:01.845995       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1018 10:32:01.854386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 10:32:01.854538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 10:32:01.854802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 10:32:01.854898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 10:32:01.855012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 10:32:01.855123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 10:32:01.855211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 10:32:01.855638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 10:32:01.855709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 10:32:01.855759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 10:32:01.855913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 10:32:01.855958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 10:32:01.856006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 10:32:01.856041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 10:32:01.856101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 10:32:01.857647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 10:32:01.857776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 10:32:01.857935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 10:32:01.858031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 10:32:02.709598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 10:32:05.247979       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 10:32:09 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:09.126955    1311 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 10:32:10 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:10.011726    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9eba0a5-422b-4250-b9b3-087619a17e95-lib-modules\") pod \"kindnet-zd5md\" (UID: \"e9eba0a5-422b-4250-b9b3-087619a17e95\") " pod="kube-system/kindnet-zd5md"
	Oct 18 10:32:10 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:10.012052    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e9eba0a5-422b-4250-b9b3-087619a17e95-cni-cfg\") pod \"kindnet-zd5md\" (UID: \"e9eba0a5-422b-4250-b9b3-087619a17e95\") " pod="kube-system/kindnet-zd5md"
	Oct 18 10:32:10 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:10.012179    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9eba0a5-422b-4250-b9b3-087619a17e95-xtables-lock\") pod \"kindnet-zd5md\" (UID: \"e9eba0a5-422b-4250-b9b3-087619a17e95\") " pod="kube-system/kindnet-zd5md"
	Oct 18 10:32:10 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:10.012278    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pqkv\" (UniqueName: \"kubernetes.io/projected/e9eba0a5-422b-4250-b9b3-087619a17e95-kube-api-access-5pqkv\") pod \"kindnet-zd5md\" (UID: \"e9eba0a5-422b-4250-b9b3-087619a17e95\") " pod="kube-system/kindnet-zd5md"
	Oct 18 10:32:10 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:10.115205    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b69ab6c-f661-4b7a-92ce-157440319945-kube-proxy\") pod \"kube-proxy-5whrp\" (UID: \"0b69ab6c-f661-4b7a-92ce-157440319945\") " pod="kube-system/kube-proxy-5whrp"
	Oct 18 10:32:10 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:10.115385    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b69ab6c-f661-4b7a-92ce-157440319945-lib-modules\") pod \"kube-proxy-5whrp\" (UID: \"0b69ab6c-f661-4b7a-92ce-157440319945\") " pod="kube-system/kube-proxy-5whrp"
	Oct 18 10:32:10 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:10.115455    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj7dt\" (UniqueName: \"kubernetes.io/projected/0b69ab6c-f661-4b7a-92ce-157440319945-kube-api-access-sj7dt\") pod \"kube-proxy-5whrp\" (UID: \"0b69ab6c-f661-4b7a-92ce-157440319945\") " pod="kube-system/kube-proxy-5whrp"
	Oct 18 10:32:10 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:10.115516    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b69ab6c-f661-4b7a-92ce-157440319945-xtables-lock\") pod \"kube-proxy-5whrp\" (UID: \"0b69ab6c-f661-4b7a-92ce-157440319945\") " pod="kube-system/kube-proxy-5whrp"
	Oct 18 10:32:10 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:10.153909    1311 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 10:32:10 default-k8s-diff-port-715182 kubelet[1311]: W1018 10:32:10.344050    1311 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/crio-08cdaaeeef011e168af169e1a002c17ad9e964fab1671b9fca170dc3405d2221 WatchSource:0}: Error finding container 08cdaaeeef011e168af169e1a002c17ad9e964fab1671b9fca170dc3405d2221: Status 404 returned error can't find the container with id 08cdaaeeef011e168af169e1a002c17ad9e964fab1671b9fca170dc3405d2221
	Oct 18 10:32:10 default-k8s-diff-port-715182 kubelet[1311]: W1018 10:32:10.371964    1311 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/crio-67e2120a5e071cdba2659b8f27ac81387762442b506d2b868700e5b21885d3fa WatchSource:0}: Error finding container 67e2120a5e071cdba2659b8f27ac81387762442b506d2b868700e5b21885d3fa: Status 404 returned error can't find the container with id 67e2120a5e071cdba2659b8f27ac81387762442b506d2b868700e5b21885d3fa
	Oct 18 10:32:10 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:10.789750    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5whrp" podStartSLOduration=1.789715174 podStartE2EDuration="1.789715174s" podCreationTimestamp="2025-10-18 10:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:32:10.789476239 +0000 UTC m=+6.535499561" watchObservedRunningTime="2025-10-18 10:32:10.789715174 +0000 UTC m=+6.535738488"
	Oct 18 10:32:10 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:10.836761    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zd5md" podStartSLOduration=1.8367416300000001 podStartE2EDuration="1.83674163s" podCreationTimestamp="2025-10-18 10:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:32:10.836532496 +0000 UTC m=+6.582555818" watchObservedRunningTime="2025-10-18 10:32:10.83674163 +0000 UTC m=+6.582764944"
	Oct 18 10:32:50 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:50.958437    1311 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 10:32:51 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:51.139551    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bf09318-3195-4ef2-a555-c4c945efa126-config-volume\") pod \"coredns-66bc5c9577-c2sb5\" (UID: \"2bf09318-3195-4ef2-a555-c4c945efa126\") " pod="kube-system/coredns-66bc5c9577-c2sb5"
	Oct 18 10:32:51 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:51.139606    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4e374f22-b5d4-4fc3-9c49-c35310ff348e-tmp\") pod \"storage-provisioner\" (UID: \"4e374f22-b5d4-4fc3-9c49-c35310ff348e\") " pod="kube-system/storage-provisioner"
	Oct 18 10:32:51 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:51.139634    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfbbv\" (UniqueName: \"kubernetes.io/projected/4e374f22-b5d4-4fc3-9c49-c35310ff348e-kube-api-access-lfbbv\") pod \"storage-provisioner\" (UID: \"4e374f22-b5d4-4fc3-9c49-c35310ff348e\") " pod="kube-system/storage-provisioner"
	Oct 18 10:32:51 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:51.139662    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nvds\" (UniqueName: \"kubernetes.io/projected/2bf09318-3195-4ef2-a555-c4c945efa126-kube-api-access-6nvds\") pod \"coredns-66bc5c9577-c2sb5\" (UID: \"2bf09318-3195-4ef2-a555-c4c945efa126\") " pod="kube-system/coredns-66bc5c9577-c2sb5"
	Oct 18 10:32:51 default-k8s-diff-port-715182 kubelet[1311]: W1018 10:32:51.311393    1311 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/crio-e6e8212227f7f0e7ecd69f1148531d5bad1cb0aefd0109b80ef35dc6d115923a WatchSource:0}: Error finding container e6e8212227f7f0e7ecd69f1148531d5bad1cb0aefd0109b80ef35dc6d115923a: Status 404 returned error can't find the container with id e6e8212227f7f0e7ecd69f1148531d5bad1cb0aefd0109b80ef35dc6d115923a
	Oct 18 10:32:51 default-k8s-diff-port-715182 kubelet[1311]: W1018 10:32:51.346335    1311 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/crio-98015c0ec0ed8bdb1159b2e4a30f14f865e27e544db16621340f9bf11700f21b WatchSource:0}: Error finding container 98015c0ec0ed8bdb1159b2e4a30f14f865e27e544db16621340f9bf11700f21b: Status 404 returned error can't find the container with id 98015c0ec0ed8bdb1159b2e4a30f14f865e27e544db16621340f9bf11700f21b
	Oct 18 10:32:51 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:51.848523    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-c2sb5" podStartSLOduration=41.848505851 podStartE2EDuration="41.848505851s" podCreationTimestamp="2025-10-18 10:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:32:51.848415414 +0000 UTC m=+47.594438744" watchObservedRunningTime="2025-10-18 10:32:51.848505851 +0000 UTC m=+47.594529165"
	Oct 18 10:32:51 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:51.866860    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.866843601 podStartE2EDuration="40.866843601s" podCreationTimestamp="2025-10-18 10:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:32:51.866776516 +0000 UTC m=+47.612799838" watchObservedRunningTime="2025-10-18 10:32:51.866843601 +0000 UTC m=+47.612866923"
	Oct 18 10:32:54 default-k8s-diff-port-715182 kubelet[1311]: I1018 10:32:54.363318    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xf54\" (UniqueName: \"kubernetes.io/projected/6a6ba823-a995-4243-bfa2-29e841489887-kube-api-access-5xf54\") pod \"busybox\" (UID: \"6a6ba823-a995-4243-bfa2-29e841489887\") " pod="default/busybox"
	Oct 18 10:32:54 default-k8s-diff-port-715182 kubelet[1311]: W1018 10:32:54.568359    1311 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/crio-ef9d038c42594fad057f40e071203e768b5a8abf65a132ca50fe146f16f36309 WatchSource:0}: Error finding container ef9d038c42594fad057f40e071203e768b5a8abf65a132ca50fe146f16f36309: Status 404 returned error can't find the container with id ef9d038c42594fad057f40e071203e768b5a8abf65a132ca50fe146f16f36309
	
	
	==> storage-provisioner [b99f882b72315ff36bcaac082fe9504b8f0561c8742b5b42439c7bf1df978ce9] <==
	I1018 10:32:51.396163       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 10:32:51.423073       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 10:32:51.423136       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 10:32:51.426197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:32:51.451305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:32:51.451446       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 10:32:51.451633       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-715182_8f72e60d-4c14-49b5-8a11-ebef0767e63b!
	I1018 10:32:51.453363       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d2469c43-20ab-4e8a-ab93-03156d0280d3", APIVersion:"v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-715182_8f72e60d-4c14-49b5-8a11-ebef0767e63b became leader
	W1018 10:32:51.453505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:32:51.462282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:32:51.552085       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-715182_8f72e60d-4c14-49b5-8a11-ebef0767e63b!
	W1018 10:32:53.465686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:32:53.470722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:32:55.474746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:32:55.488220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:32:57.490959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:32:57.498952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:32:59.502046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:32:59.506720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:33:01.510360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:33:01.514780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:33:03.518483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:33:03.524578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-715182 -n default-k8s-diff-port-715182
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-715182 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.49s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.44s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-101897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-101897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (247.357505ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:33:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-101897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
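Note: the exit status 11 above comes from the pre-flight pause check, which shells out to `sudo runc list -f json` and fails because the runc state directory /run/runc does not exist on this crio node. Below is a minimal, hedged Go sketch of that kind of check; the type and helper names are illustrative, not minikube's actual code.

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// runcContainer models the fields of interest in `runc list -f json` output.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "running", "paused"
	}
	
	// listPaused runs the same command as the failing check and returns the IDs
	// of paused containers. When /run/runc is missing, runc exits 1 and the
	// wrapped error mirrors the "Process exited with status 1" seen above.
	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, fmt.Errorf("parse runc list output: %w", err)
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}
	
	func main() {
		ids, err := listPaused()
		if err != nil {
			fmt.Println("check paused failed:", err)
			return
		}
		fmt.Println("paused containers:", ids)
	}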
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-101897 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-101897 describe deploy/metrics-server -n kube-system: exit status 1 (76.543223ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-101897 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
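Note: the assertion expects the metrics-server deployment's image to have been rewritten to the fake registry, but the deployment was never created (NotFound above), so the check sees an empty string. A hedged sketch of reproducing the image check by hand, assuming kubectl is on PATH; the jsonpath expression is an illustration, not the test's exact query.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Read the container images off the deployment the addon should have created.
		out, err := exec.Command("kubectl", "--context", "embed-certs-101897",
			"-n", "kube-system", "get", "deploy", "metrics-server",
			"-o", "jsonpath={.spec.template.spec.containers[*].image}").Output()
		if err != nil {
			// Matches the NotFound above: the addon never created the deployment.
			fmt.Println("metrics-server deployment not found:", err)
			return
		}
		want := "fake.domain/registry.k8s.io/echoserver:1.4"
		if !strings.Contains(string(out), want) {
			fmt.Printf("image mismatch: got %q, want substring %q\n", out, want)
		}
	}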
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
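Note: the snapshot above is just the three standard proxy variables read from the host environment, where "<empty>" means the variable was unset. A minimal sketch of the same check (illustrative, not the harness code):

	package main
	
	import (
		"fmt"
		"os"
	)
	
	func main() {
		// Print each proxy variable, substituting "<empty>" when it is unset.
		for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
			v := os.Getenv(k)
			if v == "" {
				v = "<empty>"
			}
			fmt.Printf("%s=%q\n", k, v)
		}
	}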
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-101897
helpers_test.go:243: (dbg) docker inspect embed-certs-101897:

-- stdout --
	[
	    {
	        "Id": "a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6",
	        "Created": "2025-10-18T10:31:37.027393759Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 476990,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:31:37.095157132Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6/hostname",
	        "HostsPath": "/var/lib/docker/containers/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6/hosts",
	        "LogPath": "/var/lib/docker/containers/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6-json.log",
	        "Name": "/embed-certs-101897",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-101897:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-101897",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6",
	                "LowerDir": "/var/lib/docker/overlay2/23e1fcb97e4afbd4d0b2f645ecf5499e46c462cdb419f43fa66b0ff224da5d89-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23e1fcb97e4afbd4d0b2f645ecf5499e46c462cdb419f43fa66b0ff224da5d89/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23e1fcb97e4afbd4d0b2f645ecf5499e46c462cdb419f43fa66b0ff224da5d89/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23e1fcb97e4afbd4d0b2f645ecf5499e46c462cdb419f43fa66b0ff224da5d89/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-101897",
	                "Source": "/var/lib/docker/volumes/embed-certs-101897/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-101897",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-101897",
	                "name.minikube.sigs.k8s.io": "embed-certs-101897",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f94c85f27e7331e3f9071a7092c9cc79bd05c489dac57034abbe6f1b6e0a1be",
	            "SandboxKey": "/var/run/docker/netns/6f94c85f27e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-101897": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:d1:39:7a:b1:39",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6f55db3cc24016b7f9a5c2a2cb317625e0e1d0053a68e0f05bbc6f3ae8ab71a",
	                    "EndpointID": "910f310636d8e789401efa0243dfcf1a220f3cb6057b042243b092bfc3d48b2d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-101897",
	                        "a8859be818ee"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
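For context, the JSON dumped above is the raw output of `docker container inspect` for the embed-certs-101897 node container, captured by the test helpers. A minimal sketch of reproducing that capture outside the harness, assuming the Docker CLI is on PATH and the container still exists; the helper below is illustrative only and is not part of the minikube test code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// inspectContainer shells out to `docker container inspect <name>`, the same
	// CLI call the helpers issue via cli_runner, and returns the raw JSON
	// document (an array holding one object, as shown above).
	func inspectContainer(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name).CombinedOutput()
		return string(out), err
	}

	func main() {
		// "embed-certs-101897" mirrors the profile under test here; substitute
		// any live container name when reproducing locally.
		out, err := inspectContainer("embed-certs-101897")
		if err != nil {
			fmt.Printf("inspect failed: %v\n%s", err, out)
			return
		}
		fmt.Print(out)
	}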
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-101897 -n embed-certs-101897
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-101897 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-101897 logs -n 25: (1.201982734s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-881658 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-881658                │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-881658 sudo crio config                                                                                                                                                                                                             │ cilium-881658                │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │                     │
	│ delete  │ -p cilium-881658                                                                                                                                                                                                                              │ cilium-881658                │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │ 18 Oct 25 10:27 UTC │
	│ start   │ -p cert-expiration-733799 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-733799       │ jenkins │ v1.37.0 │ 18 Oct 25 10:27 UTC │ 18 Oct 25 10:28 UTC │
	│ delete  │ -p force-systemd-env-360583                                                                                                                                                                                                                   │ force-systemd-env-360583     │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ start   │ -p cert-options-233372 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-233372          │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ ssh     │ cert-options-233372 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-233372          │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ ssh     │ -p cert-options-233372 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-233372          │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ delete  │ -p cert-options-233372                                                                                                                                                                                                                        │ cert-options-233372          │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ start   │ -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-309062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:29 UTC │                     │
	│ stop    │ -p old-k8s-version-309062 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:29 UTC │ 18 Oct 25 10:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-309062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:30 UTC │ 18 Oct 25 10:30 UTC │
	│ start   │ -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:30 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p cert-expiration-733799 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-733799       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ image   │ old-k8s-version-309062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ pause   │ -p old-k8s-version-309062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │                     │
	│ delete  │ -p old-k8s-version-309062                                                                                                                                                                                                                     │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ delete  │ -p old-k8s-version-309062                                                                                                                                                                                                                     │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:32 UTC │
	│ delete  │ -p cert-expiration-733799                                                                                                                                                                                                                     │ cert-expiration-733799       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:32 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-715182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-715182 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-101897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:31:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 10:31:31.108759  475717 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:31:31.108957  475717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:31:31.108970  475717 out.go:374] Setting ErrFile to fd 2...
	I1018 10:31:31.108975  475717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:31:31.109285  475717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:31:31.109869  475717 out.go:368] Setting JSON to false
	I1018 10:31:31.111136  475717 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8042,"bootTime":1760775450,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:31:31.111211  475717 start.go:141] virtualization:  
	I1018 10:31:31.126546  475717 out.go:179] * [embed-certs-101897] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:31:31.158137  475717 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:31:31.158142  475717 notify.go:220] Checking for updates...
	I1018 10:31:31.190516  475717 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:31:31.223881  475717 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:31:31.230364  475717 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:31:31.235684  475717 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:31:31.245138  475717 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:31:26.593781  475082 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 10:31:26.594206  475082 start.go:159] libmachine.API.Create for "default-k8s-diff-port-715182" (driver="docker")
	I1018 10:31:26.594251  475082 client.go:168] LocalClient.Create starting
	I1018 10:31:26.594454  475082 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem
	I1018 10:31:26.594556  475082 main.go:141] libmachine: Decoding PEM data...
	I1018 10:31:26.594572  475082 main.go:141] libmachine: Parsing certificate...
	I1018 10:31:26.594669  475082 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem
	I1018 10:31:26.594714  475082 main.go:141] libmachine: Decoding PEM data...
	I1018 10:31:26.594729  475082 main.go:141] libmachine: Parsing certificate...
	I1018 10:31:26.595162  475082 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-715182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 10:31:26.615046  475082 cli_runner.go:211] docker network inspect default-k8s-diff-port-715182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 10:31:26.615136  475082 network_create.go:284] running [docker network inspect default-k8s-diff-port-715182] to gather additional debugging logs...
	I1018 10:31:26.615156  475082 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-715182
	W1018 10:31:26.635596  475082 cli_runner.go:211] docker network inspect default-k8s-diff-port-715182 returned with exit code 1
	I1018 10:31:26.635628  475082 network_create.go:287] error running [docker network inspect default-k8s-diff-port-715182]: docker network inspect default-k8s-diff-port-715182: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-715182 not found
	I1018 10:31:26.635642  475082 network_create.go:289] output of [docker network inspect default-k8s-diff-port-715182]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-715182 not found
	
	** /stderr **
	I1018 10:31:26.635785  475082 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:31:26.661737  475082 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-57e2bd20fa2f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:61:d0:06:18:0c} reservation:<nil>}
	I1018 10:31:26.667969  475082 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb4a8c61b69d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:8c:0f:03:ab:d8} reservation:<nil>}
	I1018 10:31:26.668396  475082 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1d3a8356dfdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:ce:7a:d0:e4:d4} reservation:<nil>}
	I1018 10:31:26.668842  475082 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019da690}
	I1018 10:31:26.668874  475082 network_create.go:124] attempt to create docker network default-k8s-diff-port-715182 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 10:31:26.668931  475082 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-715182 default-k8s-diff-port-715182
	I1018 10:31:26.754867  475082 network_create.go:108] docker network default-k8s-diff-port-715182 192.168.76.0/24 created
	I1018 10:31:26.754899  475082 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-715182" container
	I1018 10:31:26.754976  475082 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 10:31:26.786620  475082 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-715182 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-715182 --label created_by.minikube.sigs.k8s.io=true
	I1018 10:31:26.809303  475082 oci.go:103] Successfully created a docker volume default-k8s-diff-port-715182
	I1018 10:31:26.809381  475082 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-715182-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-715182 --entrypoint /usr/bin/test -v default-k8s-diff-port-715182:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 10:31:27.479425  475082 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-715182
	I1018 10:31:27.479469  475082 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:31:27.479489  475082 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 10:31:27.479570  475082 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-715182:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 10:31:31.286351  475082 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-715182:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.806731968s)
	I1018 10:31:31.286378  475082 kic.go:203] duration metric: took 3.806886883s to extract preloaded images to volume ...
	W1018 10:31:31.286505  475082 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 10:31:31.286618  475082 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 10:31:31.249468  475717 config.go:182] Loaded profile config "default-k8s-diff-port-715182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:31:31.249577  475717 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:31:31.269756  475717 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:31:31.269885  475717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:31:31.367860  475717 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-18 10:31:31.358396758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:31:31.367962  475717 docker.go:318] overlay module found
	I1018 10:31:31.371069  475717 out.go:179] * Using the docker driver based on user configuration
	I1018 10:31:31.373950  475717 start.go:305] selected driver: docker
	I1018 10:31:31.373967  475717 start.go:925] validating driver "docker" against <nil>
	I1018 10:31:31.373980  475717 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:31:31.374686  475717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:31:31.476454  475717 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:55 SystemTime:2025-10-18 10:31:31.466956134 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:31:31.476627  475717 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 10:31:31.476854  475717 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:31:31.480084  475717 out.go:179] * Using Docker driver with root privileges
	I1018 10:31:31.483034  475717 cni.go:84] Creating CNI manager for ""
	I1018 10:31:31.483112  475717 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:31:31.483125  475717 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 10:31:31.483206  475717 start.go:349] cluster config:
	{Name:embed-certs-101897 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:31:31.486406  475717 out.go:179] * Starting "embed-certs-101897" primary control-plane node in "embed-certs-101897" cluster
	I1018 10:31:31.489155  475717 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:31:31.493022  475717 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:31:31.496006  475717 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:31:31.496064  475717 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 10:31:31.496073  475717 cache.go:58] Caching tarball of preloaded images
	I1018 10:31:31.496174  475717 preload.go:233] Found /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 10:31:31.496183  475717 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 10:31:31.496290  475717 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/config.json ...
	I1018 10:31:31.496307  475717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/config.json: {Name:mkd65c2fa6431ab96d83b9e3017962326c7db17d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:31.496463  475717 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:31:31.518743  475717 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:31:31.518762  475717 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 10:31:31.518833  475717 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:31:31.518905  475717 start.go:360] acquireMachinesLock for embed-certs-101897: {Name:mkdf4f50051bf510e5fec7789d20200884d252f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:31:31.519065  475717 start.go:364] duration metric: took 139.833µs to acquireMachinesLock for "embed-certs-101897"
	I1018 10:31:31.519142  475717 start.go:93] Provisioning new machine with config: &{Name:embed-certs-101897 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:31:31.519236  475717 start.go:125] createHost starting for "" (driver="docker")
	I1018 10:31:31.523229  475717 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 10:31:31.523460  475717 start.go:159] libmachine.API.Create for "embed-certs-101897" (driver="docker")
	I1018 10:31:31.523502  475717 client.go:168] LocalClient.Create starting
	I1018 10:31:31.523580  475717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem
	I1018 10:31:31.523612  475717 main.go:141] libmachine: Decoding PEM data...
	I1018 10:31:31.523625  475717 main.go:141] libmachine: Parsing certificate...
	I1018 10:31:31.523679  475717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem
	I1018 10:31:31.523698  475717 main.go:141] libmachine: Decoding PEM data...
	I1018 10:31:31.523709  475717 main.go:141] libmachine: Parsing certificate...
	I1018 10:31:31.524055  475717 cli_runner.go:164] Run: docker network inspect embed-certs-101897 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 10:31:31.546788  475717 cli_runner.go:211] docker network inspect embed-certs-101897 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 10:31:31.546859  475717 network_create.go:284] running [docker network inspect embed-certs-101897] to gather additional debugging logs...
	I1018 10:31:31.546876  475717 cli_runner.go:164] Run: docker network inspect embed-certs-101897
	W1018 10:31:31.570029  475717 cli_runner.go:211] docker network inspect embed-certs-101897 returned with exit code 1
	I1018 10:31:31.570056  475717 network_create.go:287] error running [docker network inspect embed-certs-101897]: docker network inspect embed-certs-101897: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-101897 not found
	I1018 10:31:31.570071  475717 network_create.go:289] output of [docker network inspect embed-certs-101897]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-101897 not found
	
	** /stderr **
	I1018 10:31:31.570164  475717 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:31:31.588652  475717 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-57e2bd20fa2f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:61:d0:06:18:0c} reservation:<nil>}
	I1018 10:31:31.588913  475717 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb4a8c61b69d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:8c:0f:03:ab:d8} reservation:<nil>}
	I1018 10:31:31.589920  475717 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1d3a8356dfdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:ce:7a:d0:e4:d4} reservation:<nil>}
	I1018 10:31:31.590226  475717 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-788491100ff2 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:73:3c:bb:41:b2} reservation:<nil>}
	I1018 10:31:31.590753  475717 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a84700}
	I1018 10:31:31.590775  475717 network_create.go:124] attempt to create docker network embed-certs-101897 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 10:31:31.590839  475717 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-101897 embed-certs-101897
	I1018 10:31:31.768059  475717 network_create.go:108] docker network embed-certs-101897 192.168.85.0/24 created
	I1018 10:31:31.768093  475717 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-101897" container
	I1018 10:31:31.768618  475717 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 10:31:31.800766  475717 cli_runner.go:164] Run: docker volume create embed-certs-101897 --label name.minikube.sigs.k8s.io=embed-certs-101897 --label created_by.minikube.sigs.k8s.io=true
	I1018 10:31:31.838059  475717 oci.go:103] Successfully created a docker volume embed-certs-101897
	I1018 10:31:31.838140  475717 cli_runner.go:164] Run: docker run --rm --name embed-certs-101897-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-101897 --entrypoint /usr/bin/test -v embed-certs-101897:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 10:31:32.718381  475717 oci.go:107] Successfully prepared a docker volume embed-certs-101897
	I1018 10:31:32.718426  475717 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:31:32.718458  475717 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 10:31:32.718529  475717 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-101897:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 10:31:31.380407  475082 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-715182 --name default-k8s-diff-port-715182 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-715182 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-715182 --network default-k8s-diff-port-715182 --ip 192.168.76.2 --volume default-k8s-diff-port-715182:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 10:31:31.704433  475082 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Running}}
	I1018 10:31:31.725568  475082 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:31:31.762160  475082 cli_runner.go:164] Run: docker exec default-k8s-diff-port-715182 stat /var/lib/dpkg/alternatives/iptables
	I1018 10:31:31.861956  475082 oci.go:144] the created container "default-k8s-diff-port-715182" has a running status.
	I1018 10:31:31.861997  475082 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa...
	I1018 10:31:32.305884  475082 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 10:31:32.338398  475082 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:31:32.364595  475082 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 10:31:32.364614  475082 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-715182 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 10:31:32.461522  475082 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:31:32.484677  475082 machine.go:93] provisionDockerMachine start ...
	I1018 10:31:32.484771  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:32.511905  475082 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:32.512241  475082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I1018 10:31:32.512250  475082 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:31:32.517202  475082 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50940->127.0.0.1:33429: read: connection reset by peer
	I1018 10:31:35.672852  475082 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-715182
	
	I1018 10:31:35.672876  475082 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-715182"
	I1018 10:31:35.672939  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:35.692400  475082 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:35.692742  475082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I1018 10:31:35.692757  475082 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-715182 && echo "default-k8s-diff-port-715182" | sudo tee /etc/hostname
	I1018 10:31:35.850530  475082 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-715182
	
	I1018 10:31:35.850610  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:35.869230  475082 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:35.869556  475082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I1018 10:31:35.869581  475082 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-715182' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-715182/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-715182' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:31:36.019136  475082 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 10:31:36.019212  475082 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:31:36.019261  475082 ubuntu.go:190] setting up certificates
	I1018 10:31:36.019298  475082 provision.go:84] configureAuth start
	I1018 10:31:36.019385  475082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-715182
	I1018 10:31:36.035766  475082 provision.go:143] copyHostCerts
	I1018 10:31:36.035840  475082 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:31:36.035849  475082 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:31:36.035918  475082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:31:36.036012  475082 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:31:36.036017  475082 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:31:36.036044  475082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:31:36.036093  475082 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:31:36.036097  475082 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:31:36.036119  475082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:31:36.036162  475082 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-715182 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-715182 localhost minikube]
	I1018 10:31:36.781008  475082 provision.go:177] copyRemoteCerts
	I1018 10:31:36.781088  475082 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:31:36.781149  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:36.798066  475082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:31:36.901090  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:31:36.921286  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 10:31:36.944057  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:31:36.965699  475082 provision.go:87] duration metric: took 946.368739ms to configureAuth
	I1018 10:31:36.965793  475082 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:31:36.965971  475082 config.go:182] Loaded profile config "default-k8s-diff-port-715182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:31:36.966073  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:36.987801  475082 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:36.988118  475082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I1018 10:31:36.988132  475082 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:31:37.333460  475082 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
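That SSH command leaves a one-variable environment file at /etc/sysconfig/crio.minikube and restarts CRI-O so it takes effect. How the unit consumes the variable is not shown in the log; a hypothetical systemd drop-in of the usual shape (the drop-in path and crio binary location are assumptions) would be:

	# /etc/sysconfig/crio.minikube -- written by the tee above
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

	# Hypothetical drop-in, e.g. /etc/systemd/system/crio.service.d/10-minikube.conf
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS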
	
	I1018 10:31:37.333487  475082 machine.go:96] duration metric: took 4.848791847s to provisionDockerMachine
	I1018 10:31:37.333498  475082 client.go:171] duration metric: took 10.739240115s to LocalClient.Create
	I1018 10:31:37.333511  475082 start.go:167] duration metric: took 10.739370111s to libmachine.API.Create "default-k8s-diff-port-715182"
	I1018 10:31:37.333519  475082 start.go:293] postStartSetup for "default-k8s-diff-port-715182" (driver="docker")
	I1018 10:31:37.333529  475082 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:31:37.333592  475082 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:31:37.333660  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:37.362239  475082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:31:37.496694  475082 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:31:37.501499  475082 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:31:37.501525  475082 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:31:37.501536  475082 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:31:37.501596  475082 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:31:37.501678  475082 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:31:37.501790  475082 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:31:37.523965  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:31:37.560369  475082 start.go:296] duration metric: took 226.835563ms for postStartSetup
	I1018 10:31:37.560886  475082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-715182
	I1018 10:31:37.620419  475082 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/config.json ...
	I1018 10:31:37.620721  475082 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:31:37.620761  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:37.674735  475082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:31:37.825535  475082 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:31:37.851240  475082 start.go:128] duration metric: took 11.260812369s to createHost
	I1018 10:31:37.851275  475082 start.go:83] releasing machines lock for "default-k8s-diff-port-715182", held for 11.260934906s
	I1018 10:31:37.851369  475082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-715182
	I1018 10:31:37.904900  475082 ssh_runner.go:195] Run: cat /version.json
	I1018 10:31:37.904954  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:37.905635  475082 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:31:37.905706  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:31:37.989442  475082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:31:38.001303  475082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:31:38.121667  475082 ssh_runner.go:195] Run: systemctl --version
	I1018 10:31:38.129021  475082 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:31:38.176037  475082 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:31:38.268772  475082 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:31:38.268844  475082 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:31:38.300684  475082 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
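Unescaped, the find command above is what produced that "disabled" list: it renames every bridge/podman CNI config not already suffixed, so only minikube's own CNI remains active. The same invocation with shell quoting restored:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " \
	  -exec sh -c "sudo mv {} {}.mk_disabled" \;   # GNU find substitutes {} inside the -exec string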
	I1018 10:31:38.300704  475082 start.go:495] detecting cgroup driver to use...
	I1018 10:31:38.300735  475082 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:31:38.300782  475082 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:31:38.319982  475082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:31:38.334000  475082 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:31:38.334060  475082 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:31:38.352011  475082 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:31:38.371922  475082 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:31:38.550261  475082 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:31:38.672267  475082 docker.go:234] disabling docker service ...
	I1018 10:31:38.672338  475082 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:31:38.695526  475082 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:31:38.708798  475082 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:31:38.822884  475082 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:31:38.950389  475082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
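Only one runtime may own the CRI socket, so before configuring CRI-O minikube walks containerd, cri-dockerd, and docker through the same stop / disable / mask sequence; the final is-active probe confirms docker stayed down. Grouped, the steps from the log read:

	sudo systemctl stop -f containerd            # stop containerd if it is running
	sudo systemctl stop -f cri-docker.socket     # stop cri-dockerd (socket first, then service)
	sudo systemctl stop -f cri-docker.service
	sudo systemctl disable cri-docker.socket     # keep it from starting at boot
	sudo systemctl mask cri-docker.service       # mask so nothing re-activates it
	sudo systemctl stop -f docker.socket         # same treatment for docker itself
	sudo systemctl stop -f docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service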
	I1018 10:31:38.972241  475082 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:31:39.006571  475082 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:31:39.006640  475082 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:39.021120  475082 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:31:39.021280  475082 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:39.039937  475082 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:39.051677  475082 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:39.065341  475082 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:31:39.080011  475082 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:39.092689  475082 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:39.117339  475082 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
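Taken together, the printf a few lines up pointed crictl at the CRI-O socket, and the sed edits just above reshaped /etc/crio/crio.conf.d/02-crio.conf. Reconstructed from those commands (the TOML section placement is an assumption based on CRI-O's stock config layout; nothing below was copied off the node):

	# /etc/crictl.yaml -- crictl now talks to CRI-O without --runtime-endpoint
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf -- net effect of the sed edits
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

After the daemon-reload and crio restart that follow, the sudo crictl version call below is the first consumer of both files.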
	I1018 10:31:39.128189  475082 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:31:39.139030  475082 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:31:39.147159  475082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:31:39.311832  475082 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 10:31:39.468970  475082 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:31:39.469063  475082 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:31:39.474150  475082 start.go:563] Will wait 60s for crictl version
	I1018 10:31:39.474228  475082 ssh_runner.go:195] Run: which crictl
	I1018 10:31:39.478267  475082 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:31:39.509156  475082 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:31:39.509253  475082 ssh_runner.go:195] Run: crio --version
	I1018 10:31:39.549865  475082 ssh_runner.go:195] Run: crio --version
	I1018 10:31:39.591010  475082 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:31:36.915643  475717 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-101897:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.197066874s)
	I1018 10:31:36.915675  475717 kic.go:203] duration metric: took 4.19722366s to extract preloaded images to volume ...
	W1018 10:31:36.915798  475717 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 10:31:36.915935  475717 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 10:31:36.998857  475717 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-101897 --name embed-certs-101897 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-101897 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-101897 --network embed-certs-101897 --ip 192.168.85.2 --volume embed-certs-101897:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
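This docker run is the KIC "node" being created for embed-certs-101897: privileged with fresh tmpfs mounts so systemd can boot inside, the host's kernel modules mounted read-only, /var backed by the named volume the preload tarball was just extracted into, and the interesting container ports (8443 for the apiserver and 22 for SSH among them) each published to a random, loopback-only host port. The same command reflowed, with no flags changed:

	docker run -d -t \
	  --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run \
	  -v /lib/modules:/lib/modules:ro \
	  --hostname embed-certs-101897 --name embed-certs-101897 \
	  --label created_by.minikube.sigs.k8s.io=true \
	  --label name.minikube.sigs.k8s.io=embed-certs-101897 \
	  --label role.minikube.sigs.k8s.io= \
	  --label mode.minikube.sigs.k8s.io=embed-certs-101897 \
	  --network embed-certs-101897 --ip 192.168.85.2 \
	  --volume embed-certs-101897:/var \
	  --memory=3072mb --cpus=2 \
	  -e container=docker \
	  --expose 8443 \
	  --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 \
	  --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6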
	I1018 10:31:37.374988  475717 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Running}}
	I1018 10:31:37.399186  475717 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:31:37.423927  475717 cli_runner.go:164] Run: docker exec embed-certs-101897 stat /var/lib/dpkg/alternatives/iptables
	I1018 10:31:37.474578  475717 oci.go:144] the created container "embed-certs-101897" has a running status.
	I1018 10:31:37.474615  475717 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa...
	I1018 10:31:38.409946  475717 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 10:31:38.437967  475717 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:31:38.465685  475717 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 10:31:38.465705  475717 kic_runner.go:114] Args: [docker exec --privileged embed-certs-101897 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 10:31:38.535328  475717 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:31:38.555277  475717 machine.go:93] provisionDockerMachine start ...
	I1018 10:31:38.555376  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:38.575204  475717 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:38.575555  475717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1018 10:31:38.575572  475717 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:31:38.576167  475717 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45284->127.0.0.1:33434: read: connection reset by peer
	I1018 10:31:39.593927  475082 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-715182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:31:39.610567  475082 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 10:31:39.614726  475082 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
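The bash one-liner above updates /etc/hosts without sed: strip any stale host.minikube.internal line, append the current gateway mapping, stage the result under a PID-unique temp name, and sudo cp it back (a plain '>' redirect onto /etc/hosts would run with the unprivileged shell's rights and fail). Unrolled as a sketch:

	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts     # keep everything but the old mapping
	  printf '192.168.76.1\thost.minikube.internal\n'     # re-add it with this run's gateway IP
	} > "/tmp/h.$$"                                       # $$ = PID, so parallel runs don't collide
	sudo cp "/tmp/h.$$" /etc/hosts                        # install as root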
	I1018 10:31:39.625311  475082 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-715182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-715182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:31:39.625429  475082 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:31:39.625501  475082 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:31:39.668765  475082 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:31:39.668787  475082 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:31:39.668843  475082 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:31:39.695643  475082 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:31:39.695667  475082 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:31:39.695675  475082 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1018 10:31:39.695769  475082 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-715182 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-715182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
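kubeadm.go:946 is printing the kubelet systemd drop-in it is about to install (the 378-byte 10-kubeadm.conf scp'd a few lines below), followed by the KubernetesConfig struct it was rendered from. Reassembled as the file that lands on disk:

	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (reassembled from the lines above)
	[Unit]
	Wants=crio.service

	[Service]
	# Empty ExecStart= clears any inherited command before setting the new one
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-715182 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2

	[Install]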
	I1018 10:31:39.695854  475082 ssh_runner.go:195] Run: crio config
	I1018 10:31:39.765831  475082 cni.go:84] Creating CNI manager for ""
	I1018 10:31:39.765854  475082 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:31:39.765868  475082 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:31:39.765891  475082 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-715182 NodeName:default-k8s-diff-port-715182 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:31:39.766021  475082 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-715182"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
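	That whole document is staged as /var/tmp/minikube/kubeadm.yaml.new (the 2225-byte scp below). A config like this can be sanity-checked without touching cluster state via kubeadm's dry-run mode; a sketch, not something this run executed:
	
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new \
	  --dry-run    # validates the config and prints the manifests it would write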
	
	I1018 10:31:39.766095  475082 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:31:39.774440  475082 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:31:39.774511  475082 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:31:39.781972  475082 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 10:31:39.794732  475082 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:31:39.807748  475082 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1018 10:31:39.820254  475082 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:31:39.823636  475082 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:31:39.833113  475082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:31:39.940666  475082 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:31:39.958156  475082 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182 for IP: 192.168.76.2
	I1018 10:31:39.958230  475082 certs.go:195] generating shared ca certs ...
	I1018 10:31:39.958260  475082 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:39.958431  475082 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:31:39.958506  475082 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:31:39.958537  475082 certs.go:257] generating profile certs ...
	I1018 10:31:39.958640  475082 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.key
	I1018 10:31:39.958681  475082 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.crt with IP's: []
	I1018 10:31:40.624187  475082 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.crt ...
	I1018 10:31:40.624223  475082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.crt: {Name:mkaf229aa28b7977eadb932ec5254ad5394152f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:40.624424  475082 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.key ...
	I1018 10:31:40.624438  475082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.key: {Name:mk7fc6c9d595be8b0b890cddf15b543d6402cfeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:40.624543  475082 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.key.7b193c3d
	I1018 10:31:40.624564  475082 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.crt.7b193c3d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 10:31:41.067211  475082 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.crt.7b193c3d ...
	I1018 10:31:41.067253  475082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.crt.7b193c3d: {Name:mke2be9f248a0847223ebc620a34ed95ff627493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:41.067442  475082 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.key.7b193c3d ...
	I1018 10:31:41.067459  475082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.key.7b193c3d: {Name:mk96f1bb534f441740de90d6e4e4637b836bbfcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:41.067543  475082 certs.go:382] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.crt.7b193c3d -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.crt
	I1018 10:31:41.067624  475082 certs.go:386] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.key.7b193c3d -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.key
	I1018 10:31:41.067682  475082 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.key
	I1018 10:31:41.067703  475082 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.crt with IP's: []
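Each profile cert is minted against a fixed identity list: for the apiserver cert that is 10.96.0.1 (the first IP of the 10.96.0.0/12 service CIDR, i.e. the in-cluster kubernetes Service), 127.0.0.1, 10.0.0.1, and the node IP 192.168.76.2. One way to confirm the SANs after the fact, offered as a sketch rather than anything the test ran:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'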
	I1018 10:31:41.729377  475717 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-101897
	
	I1018 10:31:41.729404  475717 ubuntu.go:182] provisioning hostname "embed-certs-101897"
	I1018 10:31:41.729466  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:41.756507  475717 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:41.756838  475717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1018 10:31:41.756854  475717 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-101897 && echo "embed-certs-101897" | sudo tee /etc/hostname
	I1018 10:31:41.929215  475717 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-101897
	
	I1018 10:31:41.929332  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:41.959113  475717 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:41.959529  475717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1018 10:31:41.959555  475717 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-101897' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-101897/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-101897' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:31:42.127062  475717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 10:31:42.127101  475717 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:31:42.127142  475717 ubuntu.go:190] setting up certificates
	I1018 10:31:42.127155  475717 provision.go:84] configureAuth start
	I1018 10:31:42.127237  475717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-101897
	I1018 10:31:42.158827  475717 provision.go:143] copyHostCerts
	I1018 10:31:42.158900  475717 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:31:42.158910  475717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:31:42.158997  475717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:31:42.159104  475717 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:31:42.159110  475717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:31:42.159137  475717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:31:42.159191  475717 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:31:42.159197  475717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:31:42.159220  475717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:31:42.159337  475717 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.embed-certs-101897 san=[127.0.0.1 192.168.85.2 embed-certs-101897 localhost minikube]
	I1018 10:31:42.645265  475717 provision.go:177] copyRemoteCerts
	I1018 10:31:42.645330  475717 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:31:42.645370  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:42.665170  475717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:31:42.778373  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:31:42.800764  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 10:31:42.826209  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 10:31:42.855572  475717 provision.go:87] duration metric: took 728.388861ms to configureAuth
	I1018 10:31:42.855643  475717 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:31:42.855872  475717 config.go:182] Loaded profile config "embed-certs-101897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:31:42.856065  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:42.876616  475717 main.go:141] libmachine: Using SSH client type: native
	I1018 10:31:42.876918  475717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1018 10:31:42.876934  475717 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:31:43.191973  475717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:31:43.192046  475717 machine.go:96] duration metric: took 4.636745173s to provisionDockerMachine
	I1018 10:31:43.192071  475717 client.go:171] duration metric: took 11.668562362s to LocalClient.Create
	I1018 10:31:43.192117  475717 start.go:167] duration metric: took 11.668642363s to libmachine.API.Create "embed-certs-101897"
	I1018 10:31:43.192142  475717 start.go:293] postStartSetup for "embed-certs-101897" (driver="docker")
	I1018 10:31:43.192164  475717 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:31:43.192255  475717 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:31:43.192349  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:43.213097  475717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:31:43.321076  475717 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:31:43.325098  475717 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:31:43.325128  475717 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:31:43.325139  475717 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:31:43.325214  475717 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:31:43.325300  475717 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:31:43.325406  475717 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:31:43.333002  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:31:43.350863  475717 start.go:296] duration metric: took 158.693537ms for postStartSetup
	I1018 10:31:43.351275  475717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-101897
	I1018 10:31:43.369618  475717 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/config.json ...
	I1018 10:31:43.369890  475717 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:31:43.369941  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:43.386784  475717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:31:43.490614  475717 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:31:43.495823  475717 start.go:128] duration metric: took 11.976570358s to createHost
	I1018 10:31:43.495847  475717 start.go:83] releasing machines lock for "embed-certs-101897", held for 11.97676898s
	I1018 10:31:43.495914  475717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-101897
	I1018 10:31:43.514818  475717 ssh_runner.go:195] Run: cat /version.json
	I1018 10:31:43.514870  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:43.515097  475717 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:31:43.515176  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:31:43.549411  475717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:31:43.555836  475717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:31:43.669622  475717 ssh_runner.go:195] Run: systemctl --version
	I1018 10:31:43.768553  475717 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:31:43.820670  475717 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:31:43.826086  475717 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:31:43.826168  475717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:31:43.857472  475717 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 10:31:43.857496  475717 start.go:495] detecting cgroup driver to use...
	I1018 10:31:43.857528  475717 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:31:43.857581  475717 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:31:43.880557  475717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:31:43.895247  475717 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:31:43.895313  475717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:31:43.913474  475717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:31:43.934213  475717 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:31:44.084100  475717 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:31:44.251095  475717 docker.go:234] disabling docker service ...
	I1018 10:31:44.251234  475717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:31:44.279796  475717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:31:44.294667  475717 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:31:44.445843  475717 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:31:44.628940  475717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:31:44.643452  475717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:31:44.657675  475717 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:31:44.657748  475717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:44.666274  475717 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:31:44.666357  475717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:44.675135  475717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:44.683798  475717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:44.693093  475717 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:31:44.702089  475717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:44.712075  475717 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:44.727022  475717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:31:44.736803  475717 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:31:44.745992  475717 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:31:44.755470  475717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:31:44.896858  475717 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 10:31:45.052123  475717 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:31:45.052219  475717 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:31:45.073934  475717 start.go:563] Will wait 60s for crictl version
	I1018 10:31:45.074011  475717 ssh_runner.go:195] Run: which crictl
	I1018 10:31:45.079522  475717 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:31:45.116726  475717 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:31:45.116838  475717 ssh_runner.go:195] Run: crio --version
	I1018 10:31:45.181915  475717 ssh_runner.go:195] Run: crio --version
	I1018 10:31:45.243791  475717 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:31:45.246766  475717 cli_runner.go:164] Run: docker network inspect embed-certs-101897 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
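The --format template above assembles a JSON object by hand from the network's IPAM config and container list. When poking at the same network interactively, docker's built-in json template function gets equivalent data with far less quoting; a sketch:

	docker network inspect embed-certs-101897 --format '{{json .IPAM.Config}}'   # subnet + gateway
	docker network inspect embed-certs-101897 --format '{{json .Containers}}'    # attached containers and their IPs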
	I1018 10:31:45.270492  475717 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 10:31:45.275804  475717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:31:45.290285  475717 kubeadm.go:883] updating cluster {Name:embed-certs-101897 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:31:45.290431  475717 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:31:45.290507  475717 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:31:45.341639  475717 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:31:45.341668  475717 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:31:45.341732  475717 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:31:45.386304  475717 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:31:45.386366  475717 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:31:45.386376  475717 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 10:31:45.386594  475717 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-101897 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 10:31:45.386922  475717 ssh_runner.go:195] Run: crio config
	I1018 10:31:45.482822  475717 cni.go:84] Creating CNI manager for ""
	I1018 10:31:45.482847  475717 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:31:45.482861  475717 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:31:45.482884  475717 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-101897 NodeName:embed-certs-101897 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:31:45.483047  475717 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-101897"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 10:31:45.483128  475717 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:31:45.494906  475717 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:31:45.494988  475717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:31:45.505670  475717 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 10:31:45.524397  475717 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:31:45.539329  475717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1018 10:31:45.553905  475717 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:31:45.558781  475717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:31:45.569258  475717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:31:45.703066  475717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:31:45.720264  475717 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897 for IP: 192.168.85.2
	I1018 10:31:45.720287  475717 certs.go:195] generating shared ca certs ...
	I1018 10:31:45.720320  475717 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:45.720501  475717 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:31:45.720561  475717 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:31:45.720574  475717 certs.go:257] generating profile certs ...
	I1018 10:31:45.720638  475717 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/client.key
	I1018 10:31:45.720653  475717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/client.crt with IP's: []
	I1018 10:31:45.822491  475717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/client.crt ...
	I1018 10:31:45.822525  475717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/client.crt: {Name:mke9cef39cf3c9ed5958ddb0b28743026da2d659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:45.822716  475717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/client.key ...
	I1018 10:31:45.822732  475717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/client.key: {Name:mka7b069975e81726e52c31299137422d3fa2629 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:45.822814  475717 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.key.cf2721a4
	I1018 10:31:45.822833  475717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.crt.cf2721a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
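	The crypto.go:68 step above mints a serving certificate whose subject alternative names are exactly the listed IPs. A self-contained crypto/x509 sketch of that step, self-signed here for brevity whereas minikube signs with the shared minikubeCA key:

	    package main

	    import (
	        "crypto/ecdsa"
	        "crypto/elliptic"
	        "crypto/rand"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "math/big"
	        "net"
	        "time"
	    )

	    func main() {
	        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	        if err != nil {
	            panic(err)
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{CommonName: "minikube"},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(24 * time.Hour),
	            // the IP SANs from the log line above
	            IPAddresses: []net.IP{
	                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
	                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
	            },
	            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        }
	        // self-signed for brevity; the real code passes the CA cert and key as parent/signer
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        if err != nil {
	            panic(err)
	        }
	        _ = der // would be PEM-encoded and written to apiserver.crt.cf2721a4
	    }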
	I1018 10:31:42.017884  475082 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.crt ...
	I1018 10:31:42.017927  475082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.crt: {Name:mkf221e8f6c1d33743f02c6335617dce0ab9b1ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:42.018129  475082 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.key ...
	I1018 10:31:42.018148  475082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.key: {Name:mk8a8c8bbc1f2a62b28ec878ae60c144682cc40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:42.018347  475082 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:31:42.018404  475082 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:31:42.018419  475082 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:31:42.018447  475082 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:31:42.018476  475082 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:31:42.018503  475082 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:31:42.018555  475082 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:31:42.019194  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:31:42.045162  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:31:42.073900  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:31:42.102512  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:31:42.126229  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 10:31:42.156729  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:31:42.185112  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:31:42.221953  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 10:31:42.269240  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:31:42.307139  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:31:42.330964  475082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:31:42.349952  475082 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:31:42.363506  475082 ssh_runner.go:195] Run: openssl version
	I1018 10:31:42.370074  475082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:31:42.378750  475082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:31:42.382826  475082 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:31:42.382905  475082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:31:42.424391  475082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:31:42.433399  475082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:31:42.441406  475082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:31:42.445680  475082 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:31:42.445740  475082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:31:42.487887  475082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 10:31:42.496347  475082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:31:42.504551  475082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:31:42.508758  475082 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:31:42.508821  475082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:31:42.550874  475082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
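	The recurring four-step pattern above (copy the PEM into /usr/share/ca-certificates, ls -la, openssl x509 -hash, ln -fs <hash>.0) installs each CA into OpenSSL's hash-based lookup directory: /etc/ssl/certs/<subject-hash>.0 must point at the PEM for verification to find it. A small sketch of the hash-and-link step via os/exec; it needs root, like the sudo in the log, and assumes openssl on PATH:

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "strings"
	    )

	    // linkCA reproduces the "openssl x509 -hash -noout" + "ln -fs" dance from the log.
	    func linkCA(pemPath string) error {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	        if err != nil {
	            return err
	        }
	        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	        os.Remove(link) // ln -fs semantics: replace any existing link
	        return os.Symlink(pemPath, link)
	    }

	    func main() {
	        if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
	            panic(err)
	        }
	    }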
	I1018 10:31:42.559476  475082 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:31:42.564191  475082 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 10:31:42.564245  475082 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-715182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-715182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:31:42.564317  475082 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:31:42.564375  475082 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:31:42.604149  475082 cri.go:89] found id: ""
	I1018 10:31:42.604297  475082 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:31:42.617019  475082 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 10:31:42.625838  475082 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 10:31:42.625912  475082 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 10:31:42.637073  475082 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 10:31:42.637090  475082 kubeadm.go:157] found existing configuration files:
	
	I1018 10:31:42.637146  475082 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1018 10:31:42.646512  475082 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 10:31:42.646571  475082 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 10:31:42.655146  475082 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1018 10:31:42.664929  475082 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 10:31:42.664994  475082 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 10:31:42.672993  475082 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1018 10:31:42.681521  475082 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 10:31:42.681580  475082 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 10:31:42.689396  475082 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1018 10:31:42.697386  475082 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 10:31:42.697458  475082 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
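	The grep/rm pairs above are stale-config cleanup: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is deleted so that kubeadm regenerates it. A standalone sketch of the same logic:

	    package main

	    import (
	        "os"
	        "strings"
	    )

	    // cleanStale removes kubeconfigs that are missing or do not reference the
	    // expected endpoint, mirroring the grep-then-rm sequence in the log.
	    func cleanStale(endpoint string, files []string) {
	        for _, f := range files {
	            data, err := os.ReadFile(f)
	            if err != nil || !strings.Contains(string(data), endpoint) {
	                os.Remove(f) // missing or stale: let kubeadm write a fresh one
	            }
	        }
	    }

	    func main() {
	        cleanStale("https://control-plane.minikube.internal:8444", []string{
	            "/etc/kubernetes/admin.conf",
	            "/etc/kubernetes/kubelet.conf",
	            "/etc/kubernetes/controller-manager.conf",
	            "/etc/kubernetes/scheduler.conf",
	        })
	    }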
	I1018 10:31:42.707691  475082 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 10:31:42.781658  475082 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 10:31:42.781938  475082 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 10:31:42.866408  475082 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 10:31:46.276517  475717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.crt.cf2721a4 ...
	I1018 10:31:46.276590  475717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.crt.cf2721a4: {Name:mk99f9dc25d745313d2c2dec6be440a6d27aebbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:46.276834  475717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.key.cf2721a4 ...
	I1018 10:31:46.276872  475717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.key.cf2721a4: {Name:mk6ac9eb27b775bc48282205d6d25f6ddb5fe0f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:46.277022  475717 certs.go:382] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.crt.cf2721a4 -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.crt
	I1018 10:31:46.277154  475717 certs.go:386] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.key.cf2721a4 -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.key
	I1018 10:31:46.277275  475717 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.key
	I1018 10:31:46.277314  475717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.crt with IP's: []
	I1018 10:31:46.549386  475717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.crt ...
	I1018 10:31:46.549413  475717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.crt: {Name:mk46b0e0b0944a2fffa37e66f4ec5cc0467cacda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:46.549586  475717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.key ...
	I1018 10:31:46.549595  475717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.key: {Name:mk51360afea0ae5803d08bb52281db45b37f4bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:31:46.549764  475717 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:31:46.549799  475717 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:31:46.549812  475717 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:31:46.549836  475717 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:31:46.549858  475717 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:31:46.549878  475717 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:31:46.549933  475717 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:31:46.550502  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:31:46.571221  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:31:46.589047  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:31:46.615192  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:31:46.634147  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 10:31:46.656984  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:31:46.678077  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:31:46.728159  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 10:31:46.765109  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:31:46.786112  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:31:46.809924  475717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:31:46.832797  475717 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:31:46.848216  475717 ssh_runner.go:195] Run: openssl version
	I1018 10:31:46.854733  475717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:31:46.863781  475717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:31:46.867904  475717 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:31:46.868018  475717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:31:46.913854  475717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 10:31:46.927037  475717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:31:46.937416  475717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:31:46.942258  475717 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:31:46.942377  475717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:31:46.985319  475717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:31:46.998766  475717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:31:47.013265  475717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:31:47.025517  475717 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:31:47.025588  475717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:31:47.068051  475717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:31:47.076398  475717 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:31:47.080192  475717 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 10:31:47.080244  475717 kubeadm.go:400] StartCluster: {Name:embed-certs-101897 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:31:47.080317  475717 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:31:47.080371  475717 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:31:47.115364  475717 cri.go:89] found id: ""
	I1018 10:31:47.115441  475717 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:31:47.129304  475717 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 10:31:47.138179  475717 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 10:31:47.138243  475717 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 10:31:47.149031  475717 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 10:31:47.149051  475717 kubeadm.go:157] found existing configuration files:
	
	I1018 10:31:47.149104  475717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 10:31:47.158366  475717 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 10:31:47.158430  475717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 10:31:47.166328  475717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 10:31:47.175030  475717 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 10:31:47.175095  475717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 10:31:47.183345  475717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 10:31:47.191919  475717 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 10:31:47.191982  475717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 10:31:47.200188  475717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 10:31:47.208958  475717 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 10:31:47.209022  475717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 10:31:47.217446  475717 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 10:31:47.265409  475717 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 10:31:47.265759  475717 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 10:31:47.307239  475717 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 10:31:47.307324  475717 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 10:31:47.307366  475717 kubeadm.go:318] OS: Linux
	I1018 10:31:47.307418  475717 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 10:31:47.307473  475717 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 10:31:47.307525  475717 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 10:31:47.307580  475717 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 10:31:47.307634  475717 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 10:31:47.307688  475717 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 10:31:47.307740  475717 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 10:31:47.307793  475717 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 10:31:47.307846  475717 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 10:31:47.413625  475717 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 10:31:47.413746  475717 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 10:31:47.413849  475717 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 10:31:47.457606  475717 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 10:31:47.463402  475717 out.go:252]   - Generating certificates and keys ...
	I1018 10:31:47.463503  475717 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 10:31:47.463580  475717 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 10:31:48.582996  475717 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 10:31:48.969562  475717 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 10:31:49.426304  475717 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 10:31:49.847458  475717 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 10:31:50.758192  475717 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 10:31:50.758542  475717 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-101897 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 10:31:51.765102  475717 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 10:31:51.765337  475717 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-101897 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 10:31:52.112713  475717 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 10:31:52.185474  475717 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 10:31:53.381558  475717 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 10:31:53.381640  475717 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 10:31:53.889550  475717 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 10:31:54.077561  475717 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 10:31:54.319757  475717 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 10:31:54.685159  475717 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 10:31:55.197590  475717 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 10:31:55.200398  475717 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 10:31:55.213572  475717 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 10:31:55.217316  475717 out.go:252]   - Booting up control plane ...
	I1018 10:31:55.217443  475717 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 10:31:55.217525  475717 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 10:31:55.217604  475717 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 10:31:55.262640  475717 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 10:31:55.263007  475717 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 10:31:55.276000  475717 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 10:31:55.287773  475717 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 10:31:55.287853  475717 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 10:31:55.498375  475717 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 10:31:55.498506  475717 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 10:31:58.001588  475717 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.500633833s
	I1018 10:31:58.002387  475717 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 10:31:58.002630  475717 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 10:31:58.002729  475717 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 10:31:58.002812  475717 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
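	The [control-plane-check] phase polls each component's local health endpoint until it answers 200 OK, within the 4m0s ceiling noted above. A minimal poller in the same spirit; certificate verification is skipped because this is a sketch against a self-signed apiserver cert:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitHealthy polls url until it returns 200 OK or the deadline passes,
	    // roughly what kubeadm does for the three URLs in the log.
	    func waitHealthy(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout:   2 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if resp, err := client.Get(url); err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("%s not healthy after %s", url, timeout)
	    }

	    func main() {
	        fmt.Println(waitHealthy("https://192.168.85.2:8443/livez", 4*time.Minute))
	    }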
	I1018 10:32:04.865820  475082 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 10:32:04.865878  475082 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 10:32:04.865969  475082 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 10:32:04.866027  475082 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 10:32:04.866062  475082 kubeadm.go:318] OS: Linux
	I1018 10:32:04.866109  475082 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 10:32:04.866159  475082 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 10:32:04.866209  475082 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 10:32:04.866259  475082 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 10:32:04.866310  475082 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 10:32:04.866360  475082 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 10:32:04.866408  475082 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 10:32:04.866458  475082 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 10:32:04.866507  475082 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 10:32:04.866581  475082 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 10:32:04.866679  475082 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 10:32:04.866772  475082 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 10:32:04.866837  475082 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 10:32:04.869875  475082 out.go:252]   - Generating certificates and keys ...
	I1018 10:32:04.869978  475082 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 10:32:04.870047  475082 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 10:32:04.870117  475082 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 10:32:04.870177  475082 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 10:32:04.870245  475082 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 10:32:04.870298  475082 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 10:32:04.870355  475082 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 10:32:04.870493  475082 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-715182 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 10:32:04.870550  475082 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 10:32:04.870686  475082 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-715182 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 10:32:04.870754  475082 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 10:32:04.870831  475082 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 10:32:04.870879  475082 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 10:32:04.870938  475082 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 10:32:04.870990  475082 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 10:32:04.871050  475082 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 10:32:04.871109  475082 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 10:32:04.871175  475082 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 10:32:04.871233  475082 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 10:32:04.871318  475082 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 10:32:04.871387  475082 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 10:32:04.874425  475082 out.go:252]   - Booting up control plane ...
	I1018 10:32:04.874604  475082 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 10:32:04.874740  475082 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 10:32:04.874862  475082 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 10:32:04.875043  475082 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 10:32:04.875202  475082 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 10:32:04.875365  475082 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 10:32:04.875462  475082 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 10:32:04.875505  475082 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 10:32:04.875649  475082 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 10:32:04.875765  475082 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 10:32:04.875834  475082 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001803384s
	I1018 10:32:04.875936  475082 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 10:32:04.876026  475082 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1018 10:32:04.876124  475082 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 10:32:04.876211  475082 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 10:32:04.876295  475082 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.415755357s
	I1018 10:32:04.876369  475082 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.394818898s
	I1018 10:32:04.876454  475082 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.002194617s
	I1018 10:32:04.876572  475082 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 10:32:04.876712  475082 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 10:32:04.876785  475082 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 10:32:04.877002  475082 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-715182 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 10:32:04.877064  475082 kubeadm.go:318] [bootstrap-token] Using token: 1xbay4.ra29h3fawbyrwawj
	I1018 10:32:04.880133  475082 out.go:252]   - Configuring RBAC rules ...
	I1018 10:32:04.880269  475082 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 10:32:04.880397  475082 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 10:32:04.880555  475082 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 10:32:04.880697  475082 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 10:32:04.880825  475082 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 10:32:04.880920  475082 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 10:32:04.881049  475082 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 10:32:04.881098  475082 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 10:32:04.881149  475082 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 10:32:04.881154  475082 kubeadm.go:318] 
	I1018 10:32:04.881321  475082 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 10:32:04.881327  475082 kubeadm.go:318] 
	I1018 10:32:04.881413  475082 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 10:32:04.881417  475082 kubeadm.go:318] 
	I1018 10:32:04.881445  475082 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 10:32:04.881510  475082 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 10:32:04.881567  475082 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 10:32:04.881572  475082 kubeadm.go:318] 
	I1018 10:32:04.881632  475082 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 10:32:04.881636  475082 kubeadm.go:318] 
	I1018 10:32:04.881689  475082 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 10:32:04.881694  475082 kubeadm.go:318] 
	I1018 10:32:04.881752  475082 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 10:32:04.881835  475082 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 10:32:04.881911  475082 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 10:32:04.881915  475082 kubeadm.go:318] 
	I1018 10:32:04.882009  475082 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 10:32:04.882095  475082 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 10:32:04.882100  475082 kubeadm.go:318] 
	I1018 10:32:04.882194  475082 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token 1xbay4.ra29h3fawbyrwawj \
	I1018 10:32:04.882309  475082 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 \
	I1018 10:32:04.882332  475082 kubeadm.go:318] 	--control-plane 
	I1018 10:32:04.882337  475082 kubeadm.go:318] 
	I1018 10:32:04.882439  475082 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 10:32:04.882444  475082 kubeadm.go:318] 
	I1018 10:32:04.882535  475082 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token 1xbay4.ra29h3fawbyrwawj \
	I1018 10:32:04.882664  475082 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 
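	The --discovery-token-ca-cert-hash in the join command above is "sha256:" followed by the SHA-256 digest of the cluster CA's DER-encoded Subject Public Key Info, which lets joining nodes pin the CA without a prior trust channel. It can be recomputed from ca.crt:

	    package main

	    import (
	        "crypto/sha256"
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	    )

	    func main() {
	        // recompute the join command's --discovery-token-ca-cert-hash from the CA cert
	        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path from the log above
	        if err != nil {
	            panic(err)
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            panic("no PEM block in ca.crt")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            panic(err)
	        }
	        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo) // digest of the DER-encoded SPKI
	        fmt.Printf("sha256:%x\n", sum)
	    }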
	I1018 10:32:04.882673  475082 cni.go:84] Creating CNI manager for ""
	I1018 10:32:04.882680  475082 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:32:04.887196  475082 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 10:32:03.635817  475717 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.632797732s
	I1018 10:32:05.789740  475717 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.786604719s
	I1018 10:32:04.890279  475082 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 10:32:04.895282  475082 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 10:32:04.895304  475082 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 10:32:04.924040  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 10:32:05.611442  475082 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 10:32:05.611579  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:05.611663  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-715182 minikube.k8s.io/updated_at=2025_10_18T10_32_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=default-k8s-diff-port-715182 minikube.k8s.io/primary=true
	I1018 10:32:06.034410  475082 ops.go:34] apiserver oom_adj: -16
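	The ops.go:34 line reads the apiserver's legacy OOM adjustment (here -16, strongly protected from the kernel OOM killer) via the /proc command shown at 10:32:05.611442. The same check in Go, assuming a single kube-apiserver process:

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
	        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	        if err != nil {
	            panic(err)
	        }
	        adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println(strings.TrimSpace(string(adj))) // "-16" in the run above
	    }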
	I1018 10:32:06.034522  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:07.504260  475717 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.501509033s
	I1018 10:32:07.523967  475717 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 10:32:07.549625  475717 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 10:32:07.566172  475717 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 10:32:07.566426  475717 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-101897 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 10:32:07.593126  475717 kubeadm.go:318] [bootstrap-token] Using token: q941ou.y2vfl8rz7u2y7kaa
	I1018 10:32:07.596146  475717 out.go:252]   - Configuring RBAC rules ...
	I1018 10:32:07.596336  475717 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 10:32:07.605004  475717 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 10:32:07.614253  475717 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 10:32:07.622617  475717 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 10:32:07.629829  475717 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 10:32:07.645014  475717 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 10:32:07.919086  475717 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 10:32:08.401299  475717 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 10:32:08.912234  475717 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 10:32:08.913524  475717 kubeadm.go:318] 
	I1018 10:32:08.913607  475717 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 10:32:08.913618  475717 kubeadm.go:318] 
	I1018 10:32:08.913700  475717 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 10:32:08.913711  475717 kubeadm.go:318] 
	I1018 10:32:08.913738  475717 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 10:32:08.913804  475717 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 10:32:08.913863  475717 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 10:32:08.913872  475717 kubeadm.go:318] 
	I1018 10:32:08.913929  475717 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 10:32:08.913938  475717 kubeadm.go:318] 
	I1018 10:32:08.913988  475717 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 10:32:08.913996  475717 kubeadm.go:318] 
	I1018 10:32:08.914052  475717 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 10:32:08.914135  475717 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 10:32:08.914212  475717 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 10:32:08.914220  475717 kubeadm.go:318] 
	I1018 10:32:08.914309  475717 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 10:32:08.914393  475717 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 10:32:08.914401  475717 kubeadm.go:318] 
	I1018 10:32:08.914500  475717 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token q941ou.y2vfl8rz7u2y7kaa \
	I1018 10:32:08.914612  475717 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 \
	I1018 10:32:08.914637  475717 kubeadm.go:318] 	--control-plane 
	I1018 10:32:08.914646  475717 kubeadm.go:318] 
	I1018 10:32:08.914735  475717 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 10:32:08.914743  475717 kubeadm.go:318] 
	I1018 10:32:08.914829  475717 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token q941ou.y2vfl8rz7u2y7kaa \
	I1018 10:32:08.914939  475717 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 
	I1018 10:32:08.917607  475717 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 10:32:08.917857  475717 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 10:32:08.917973  475717 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 10:32:08.918051  475717 cni.go:84] Creating CNI manager for ""
	I1018 10:32:08.918086  475717 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:32:08.923209  475717 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 10:32:06.534653  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:07.034691  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:07.535122  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:08.035460  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:08.535549  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:09.034692  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:09.534702  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:10.035318  475082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:10.276417  475082 kubeadm.go:1113] duration metric: took 4.664883185s to wait for elevateKubeSystemPrivileges
	I1018 10:32:10.276444  475082 kubeadm.go:402] duration metric: took 27.712204978s to StartCluster
	I1018 10:32:10.276461  475082 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:32:10.276522  475082 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:32:10.277281  475082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:32:10.277478  475082 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:32:10.277613  475082 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 10:32:10.277854  475082 config.go:182] Loaded profile config "default-k8s-diff-port-715182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:32:10.277833  475082 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:32:10.277915  475082 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-715182"
	I1018 10:32:10.277925  475082 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-715182"
	I1018 10:32:10.277938  475082 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-715182"
	I1018 10:32:10.277946  475082 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-715182"
	I1018 10:32:10.277970  475082 host.go:66] Checking if "default-k8s-diff-port-715182" exists ...
	I1018 10:32:10.278243  475082 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:32:10.278402  475082 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:32:10.280827  475082 out.go:179] * Verifying Kubernetes components...
	I1018 10:32:10.284313  475082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:32:10.325879  475082 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-715182"
	I1018 10:32:10.325921  475082 host.go:66] Checking if "default-k8s-diff-port-715182" exists ...
	I1018 10:32:10.326371  475082 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:32:10.333912  475082 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:32:08.927059  475717 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 10:32:08.931638  475717 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 10:32:08.931659  475717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 10:32:08.958861  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 10:32:09.372119  475717 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 10:32:09.372245  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:09.372310  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-101897 minikube.k8s.io/updated_at=2025_10_18T10_32_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=embed-certs-101897 minikube.k8s.io/primary=true
	I1018 10:32:09.554209  475717 ops.go:34] apiserver oom_adj: -16
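
	The ops.go line above records the apiserver's OOM score adjustment (-16 keeps the kernel OOM killer away from the control plane). The probe is just a procfs read; a tiny Go sketch of the same check, where pid is a hypothetical stand-in for the pgrep result:

	    // pid is a placeholder for the kube-apiserver PID found via pgrep.
	    raw, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	    if err != nil {
	    	log.Fatal(err)
	    }
	    fmt.Println(strings.TrimSpace(string(raw))) // e.g. -16
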
	I1018 10:32:09.554338  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:10.055097  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:10.555311  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:11.054355  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:10.337100  475082 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:32:10.337122  475082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:32:10.337205  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:32:10.353175  475082 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:32:10.353234  475082 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:32:10.353294  475082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:32:10.387134  475082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:32:10.395094  475082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:32:10.874468  475082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:32:10.891006  475082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:32:10.906078  475082 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
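
	The sed pipeline above rewrites the coredns ConfigMap in place: it injects a hosts block ahead of the forward directive (so pods can resolve host.minikube.internal) and a log directive before errors. Reconstructed from the sed expressions, the relevant Corefile section afterwards reads:

	        log
	        errors
	        ...
	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
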
	I1018 10:32:10.906278  475082 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:32:11.774184  475082 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-715182" to be "Ready" ...
	I1018 10:32:11.774474  475082 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1018 10:32:11.820953  475082 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 10:32:11.555095  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:12.054471  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:12.554856  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:13.054532  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:13.554456  475717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:32:13.661094  475717 kubeadm.go:1113] duration metric: took 4.288891615s to wait for elevateKubeSystemPrivileges
	I1018 10:32:13.661128  475717 kubeadm.go:402] duration metric: took 26.58088751s to StartCluster
	I1018 10:32:13.661146  475717 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:32:13.661250  475717 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:32:13.662633  475717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:32:13.662878  475717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 10:32:13.662882  475717 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:32:13.663166  475717 config.go:182] Loaded profile config "embed-certs-101897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:32:13.663201  475717 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:32:13.663263  475717 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-101897"
	I1018 10:32:13.663277  475717 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-101897"
	I1018 10:32:13.663298  475717 host.go:66] Checking if "embed-certs-101897" exists ...
	I1018 10:32:13.663760  475717 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:32:13.664120  475717 addons.go:69] Setting default-storageclass=true in profile "embed-certs-101897"
	I1018 10:32:13.664145  475717 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-101897"
	I1018 10:32:13.664444  475717 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:32:13.666672  475717 out.go:179] * Verifying Kubernetes components...
	I1018 10:32:13.673453  475717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:32:13.696580  475717 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:32:13.699403  475717 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:32:13.699426  475717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:32:13.699499  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:32:13.727031  475717 addons.go:238] Setting addon default-storageclass=true in "embed-certs-101897"
	I1018 10:32:13.727076  475717 host.go:66] Checking if "embed-certs-101897" exists ...
	I1018 10:32:13.727513  475717 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:32:13.745295  475717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:32:13.762815  475717 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:32:13.762838  475717 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:32:13.762917  475717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:32:13.792320  475717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:32:14.025504  475717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:32:14.112087  475717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 10:32:14.112233  475717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:32:14.147653  475717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:32:14.645378  475717 node_ready.go:35] waiting up to 6m0s for node "embed-certs-101897" to be "Ready" ...
	I1018 10:32:14.645642  475717 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1018 10:32:14.835445  475717 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1018 10:32:14.838305  475717 addons.go:514] duration metric: took 1.175073755s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1018 10:32:15.152311  475717 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-101897" context rescaled to 1 replicas
	I1018 10:32:11.823687  475082 addons.go:514] duration metric: took 1.545847877s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 10:32:12.277598  475082 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-715182" context rescaled to 1 replicas
	W1018 10:32:13.779972  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:16.278128  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:16.649003  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:18.649496  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:18.776881  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:20.777766  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:21.148542  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:23.648242  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:25.648716  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:22.778682  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:25.277820  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:28.148212  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:30.148663  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:27.277882  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:29.777875  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:32.149130  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:34.649291  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:32.277109  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:34.778732  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:37.150232  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:39.648902  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:37.277135  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:39.777346  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:41.648943  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:44.148321  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:41.778175  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:43.778656  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:46.277951  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:46.148807  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:48.648633  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:48.778112  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	W1018 10:32:50.778274  475082 node_ready.go:57] node "default-k8s-diff-port-715182" has "Ready":"False" status (will retry)
	I1018 10:32:51.277421  475082 node_ready.go:49] node "default-k8s-diff-port-715182" is "Ready"
	I1018 10:32:51.277455  475082 node_ready.go:38] duration metric: took 39.503237928s for node "default-k8s-diff-port-715182" to be "Ready" ...
	I1018 10:32:51.277469  475082 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:32:51.277524  475082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:32:51.291846  475082 api_server.go:72] duration metric: took 41.014338044s to wait for apiserver process to appear ...
	I1018 10:32:51.291870  475082 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:32:51.291889  475082 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1018 10:32:51.302386  475082 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1018 10:32:51.303749  475082 api_server.go:141] control plane version: v1.34.1
	I1018 10:32:51.303777  475082 api_server.go:131] duration metric: took 11.899909ms to wait for apiserver health ...
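
	The healthz wait above is a plain HTTPS GET against the apiserver. A minimal sketch, assuming a self-signed test cluster (hence InsecureSkipVerify here; minikube itself verifies against the cluster CA):

	    client := &http.Client{
	    	Timeout: 5 * time.Second,
	    	Transport: &http.Transport{
	    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	    	},
	    }
	    resp, err := client.Get("https://192.168.76.2:8444/healthz")
	    if err != nil {
	    	log.Fatal(err)
	    }
	    defer resp.Body.Close()
	    body, _ := io.ReadAll(resp.Body)
	    fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
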
	I1018 10:32:51.303787  475082 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:32:51.307620  475082 system_pods.go:59] 8 kube-system pods found
	I1018 10:32:51.307654  475082 system_pods.go:61] "coredns-66bc5c9577-c2sb5" [2bf09318-3195-4ef2-a555-c4c945efa126] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:32:51.307662  475082 system_pods.go:61] "etcd-default-k8s-diff-port-715182" [13b11953-c29c-4d29-ae1b-ebce1e53f950] Running
	I1018 10:32:51.307668  475082 system_pods.go:61] "kindnet-zd5md" [e9eba0a5-422b-4250-b9b3-087619a17e95] Running
	I1018 10:32:51.307677  475082 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-715182" [823d4f57-e97b-4366-b670-121e096a2102] Running
	I1018 10:32:51.307682  475082 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-715182" [ad9c1831-0e8f-410e-a084-a4f84aeda8d8] Running
	I1018 10:32:51.307695  475082 system_pods.go:61] "kube-proxy-5whrp" [0b69ab6c-f661-4b7a-92ce-157440319945] Running
	I1018 10:32:51.307700  475082 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-715182" [7aa74f8f-2fa6-4ef0-9ee1-c81d0366174e] Running
	I1018 10:32:51.307706  475082 system_pods.go:61] "storage-provisioner" [4e374f22-b5d4-4fc3-9c49-c35310ff348e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:32:51.307719  475082 system_pods.go:74] duration metric: took 3.92612ms to wait for pod list to return data ...
	I1018 10:32:51.307728  475082 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:32:51.311778  475082 default_sa.go:45] found service account: "default"
	I1018 10:32:51.311803  475082 default_sa.go:55] duration metric: took 4.064369ms for default service account to be created ...
	I1018 10:32:51.311812  475082 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 10:32:51.315953  475082 system_pods.go:86] 8 kube-system pods found
	I1018 10:32:51.316027  475082 system_pods.go:89] "coredns-66bc5c9577-c2sb5" [2bf09318-3195-4ef2-a555-c4c945efa126] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:32:51.316044  475082 system_pods.go:89] "etcd-default-k8s-diff-port-715182" [13b11953-c29c-4d29-ae1b-ebce1e53f950] Running
	I1018 10:32:51.316053  475082 system_pods.go:89] "kindnet-zd5md" [e9eba0a5-422b-4250-b9b3-087619a17e95] Running
	I1018 10:32:51.316058  475082 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-715182" [823d4f57-e97b-4366-b670-121e096a2102] Running
	I1018 10:32:51.316063  475082 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-715182" [ad9c1831-0e8f-410e-a084-a4f84aeda8d8] Running
	I1018 10:32:51.316068  475082 system_pods.go:89] "kube-proxy-5whrp" [0b69ab6c-f661-4b7a-92ce-157440319945] Running
	I1018 10:32:51.316072  475082 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-715182" [7aa74f8f-2fa6-4ef0-9ee1-c81d0366174e] Running
	I1018 10:32:51.316095  475082 system_pods.go:89] "storage-provisioner" [4e374f22-b5d4-4fc3-9c49-c35310ff348e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:32:51.316136  475082 retry.go:31] will retry after 250.86488ms: missing components: kube-dns
	I1018 10:32:51.571188  475082 system_pods.go:86] 8 kube-system pods found
	I1018 10:32:51.571224  475082 system_pods.go:89] "coredns-66bc5c9577-c2sb5" [2bf09318-3195-4ef2-a555-c4c945efa126] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:32:51.571231  475082 system_pods.go:89] "etcd-default-k8s-diff-port-715182" [13b11953-c29c-4d29-ae1b-ebce1e53f950] Running
	I1018 10:32:51.571239  475082 system_pods.go:89] "kindnet-zd5md" [e9eba0a5-422b-4250-b9b3-087619a17e95] Running
	I1018 10:32:51.571246  475082 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-715182" [823d4f57-e97b-4366-b670-121e096a2102] Running
	I1018 10:32:51.571250  475082 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-715182" [ad9c1831-0e8f-410e-a084-a4f84aeda8d8] Running
	I1018 10:32:51.571255  475082 system_pods.go:89] "kube-proxy-5whrp" [0b69ab6c-f661-4b7a-92ce-157440319945] Running
	I1018 10:32:51.571259  475082 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-715182" [7aa74f8f-2fa6-4ef0-9ee1-c81d0366174e] Running
	I1018 10:32:51.571265  475082 system_pods.go:89] "storage-provisioner" [4e374f22-b5d4-4fc3-9c49-c35310ff348e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:32:51.571286  475082 retry.go:31] will retry after 243.388244ms: missing components: kube-dns
	I1018 10:32:51.820258  475082 system_pods.go:86] 8 kube-system pods found
	I1018 10:32:51.820289  475082 system_pods.go:89] "coredns-66bc5c9577-c2sb5" [2bf09318-3195-4ef2-a555-c4c945efa126] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:32:51.820296  475082 system_pods.go:89] "etcd-default-k8s-diff-port-715182" [13b11953-c29c-4d29-ae1b-ebce1e53f950] Running
	I1018 10:32:51.820325  475082 system_pods.go:89] "kindnet-zd5md" [e9eba0a5-422b-4250-b9b3-087619a17e95] Running
	I1018 10:32:51.820338  475082 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-715182" [823d4f57-e97b-4366-b670-121e096a2102] Running
	I1018 10:32:51.820344  475082 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-715182" [ad9c1831-0e8f-410e-a084-a4f84aeda8d8] Running
	I1018 10:32:51.820358  475082 system_pods.go:89] "kube-proxy-5whrp" [0b69ab6c-f661-4b7a-92ce-157440319945] Running
	I1018 10:32:51.820367  475082 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-715182" [7aa74f8f-2fa6-4ef0-9ee1-c81d0366174e] Running
	I1018 10:32:51.820381  475082 system_pods.go:89] "storage-provisioner" [4e374f22-b5d4-4fc3-9c49-c35310ff348e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:32:51.820412  475082 retry.go:31] will retry after 473.612147ms: missing components: kube-dns
	I1018 10:32:52.298784  475082 system_pods.go:86] 8 kube-system pods found
	I1018 10:32:52.298816  475082 system_pods.go:89] "coredns-66bc5c9577-c2sb5" [2bf09318-3195-4ef2-a555-c4c945efa126] Running
	I1018 10:32:52.298823  475082 system_pods.go:89] "etcd-default-k8s-diff-port-715182" [13b11953-c29c-4d29-ae1b-ebce1e53f950] Running
	I1018 10:32:52.298830  475082 system_pods.go:89] "kindnet-zd5md" [e9eba0a5-422b-4250-b9b3-087619a17e95] Running
	I1018 10:32:52.298856  475082 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-715182" [823d4f57-e97b-4366-b670-121e096a2102] Running
	I1018 10:32:52.298872  475082 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-715182" [ad9c1831-0e8f-410e-a084-a4f84aeda8d8] Running
	I1018 10:32:52.298877  475082 system_pods.go:89] "kube-proxy-5whrp" [0b69ab6c-f661-4b7a-92ce-157440319945] Running
	I1018 10:32:52.298882  475082 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-715182" [7aa74f8f-2fa6-4ef0-9ee1-c81d0366174e] Running
	I1018 10:32:52.298886  475082 system_pods.go:89] "storage-provisioner" [4e374f22-b5d4-4fc3-9c49-c35310ff348e] Running
	I1018 10:32:52.298894  475082 system_pods.go:126] duration metric: took 987.07601ms to wait for k8s-apps to be running ...
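
	The retry.go lines above show the polling pattern at work: list the kube-system pods, collect any components that are not yet Running, and sleep a short randomized interval before trying again. A hedged sketch of that loop, where checkKubeSystemPods is a hypothetical helper standing in for the pod listing in the log:

	    deadline := time.Now().Add(6 * time.Minute)
	    for {
	    	missing := checkKubeSystemPods() // hypothetical: names of non-Running components
	    	if len(missing) == 0 {
	    		break // all k8s-apps are running
	    	}
	    	if time.Now().After(deadline) {
	    		log.Fatalf("timed out waiting for components: %v", missing)
	    	}
	    	wait := time.Duration(200+rand.Intn(400)) * time.Millisecond
	    	log.Printf("will retry after %v: missing components: %v", wait, missing)
	    	time.Sleep(wait)
	    }
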
	I1018 10:32:52.298908  475082 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 10:32:52.298971  475082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:32:52.313737  475082 system_svc.go:56] duration metric: took 14.819996ms WaitForService to wait for kubelet
	I1018 10:32:52.313768  475082 kubeadm.go:586] duration metric: took 42.036265766s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:32:52.313786  475082 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:32:52.316609  475082 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:32:52.316641  475082 node_conditions.go:123] node cpu capacity is 2
	I1018 10:32:52.316655  475082 node_conditions.go:105] duration metric: took 2.863159ms to run NodePressure ...
	I1018 10:32:52.316668  475082 start.go:241] waiting for startup goroutines ...
	I1018 10:32:52.316676  475082 start.go:246] waiting for cluster config update ...
	I1018 10:32:52.316686  475082 start.go:255] writing updated cluster config ...
	I1018 10:32:52.316982  475082 ssh_runner.go:195] Run: rm -f paused
	I1018 10:32:52.320682  475082 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:32:52.324844  475082 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c2sb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:52.329812  475082 pod_ready.go:94] pod "coredns-66bc5c9577-c2sb5" is "Ready"
	I1018 10:32:52.329843  475082 pod_ready.go:86] duration metric: took 4.970314ms for pod "coredns-66bc5c9577-c2sb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:52.333026  475082 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:52.337800  475082 pod_ready.go:94] pod "etcd-default-k8s-diff-port-715182" is "Ready"
	I1018 10:32:52.337827  475082 pod_ready.go:86] duration metric: took 4.773505ms for pod "etcd-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:52.340220  475082 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:52.345148  475082 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-715182" is "Ready"
	I1018 10:32:52.345179  475082 pod_ready.go:86] duration metric: took 4.935073ms for pod "kube-apiserver-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:52.347627  475082 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:52.725323  475082 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-715182" is "Ready"
	I1018 10:32:52.725353  475082 pod_ready.go:86] duration metric: took 377.699943ms for pod "kube-controller-manager-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:52.925643  475082 pod_ready.go:83] waiting for pod "kube-proxy-5whrp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:53.324803  475082 pod_ready.go:94] pod "kube-proxy-5whrp" is "Ready"
	I1018 10:32:53.324879  475082 pod_ready.go:86] duration metric: took 399.209488ms for pod "kube-proxy-5whrp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:53.525288  475082 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:53.925633  475082 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-715182" is "Ready"
	I1018 10:32:53.925659  475082 pod_ready.go:86] duration metric: took 400.345583ms for pod "kube-scheduler-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:53.925673  475082 pod_ready.go:40] duration metric: took 1.604959783s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
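
	The per-pod Ready waits above translate to a straightforward client-go query: list pods by label selector and inspect the PodReady condition. A sketch assuming clientset is an already-constructed *kubernetes.Clientset:

	    // metav1 = k8s.io/apimachinery/pkg/apis/meta/v1, corev1 = k8s.io/api/core/v1
	    pods, err := clientset.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
	    	LabelSelector: "k8s-app=kube-dns", // one of the selectors listed in the log
	    })
	    if err != nil {
	    	log.Fatal(err)
	    }
	    for _, p := range pods.Items {
	    	for _, c := range p.Status.Conditions {
	    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	    			fmt.Printf("pod %q is Ready\n", p.Name)
	    		}
	    	}
	    }
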
	I1018 10:32:53.991639  475082 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:32:53.995011  475082 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-715182" cluster and "default" namespace by default
	W1018 10:32:51.148947  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	W1018 10:32:53.648942  475717 node_ready.go:57] node "embed-certs-101897" has "Ready":"False" status (will retry)
	I1018 10:32:55.149307  475717 node_ready.go:49] node "embed-certs-101897" is "Ready"
	I1018 10:32:55.149340  475717 node_ready.go:38] duration metric: took 40.503928468s for node "embed-certs-101897" to be "Ready" ...
	I1018 10:32:55.149353  475717 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:32:55.149414  475717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:32:55.162144  475717 api_server.go:72] duration metric: took 41.499232613s to wait for apiserver process to appear ...
	I1018 10:32:55.162168  475717 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:32:55.162187  475717 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 10:32:55.170613  475717 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 10:32:55.171641  475717 api_server.go:141] control plane version: v1.34.1
	I1018 10:32:55.171664  475717 api_server.go:131] duration metric: took 9.489597ms to wait for apiserver health ...
	I1018 10:32:55.171673  475717 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:32:55.175040  475717 system_pods.go:59] 8 kube-system pods found
	I1018 10:32:55.175080  475717 system_pods.go:61] "coredns-66bc5c9577-hxrmf" [0afa9baa-7349-44ad-ab0d-5a8cf04751c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:32:55.175088  475717 system_pods.go:61] "etcd-embed-certs-101897" [bdfd5bce-7d86-4e96-ada2-43cd7ea36ba9] Running
	I1018 10:32:55.175094  475717 system_pods.go:61] "kindnet-qt6bn" [e8f627be-9c95-40c3-9c90-959737c71fc9] Running
	I1018 10:32:55.175099  475717 system_pods.go:61] "kube-apiserver-embed-certs-101897" [70a4bcb4-f0af-4bcf-9101-062ba75dbba9] Running
	I1018 10:32:55.175104  475717 system_pods.go:61] "kube-controller-manager-embed-certs-101897" [c6ed118d-dbcd-457c-b23d-dac329134f87] Running
	I1018 10:32:55.175109  475717 system_pods.go:61] "kube-proxy-bp45x" [1fb88f61-5197-4234-b157-2c84ed2dd0f3] Running
	I1018 10:32:55.175115  475717 system_pods.go:61] "kube-scheduler-embed-certs-101897" [59f4e8f7-bba7-4029-918c-1f827651aecb] Running
	I1018 10:32:55.175121  475717 system_pods.go:61] "storage-provisioner" [0d449f69-e21a-40a5-8c77-65c4665a58f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:32:55.175133  475717 system_pods.go:74] duration metric: took 3.453056ms to wait for pod list to return data ...
	I1018 10:32:55.175144  475717 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:32:55.177684  475717 default_sa.go:45] found service account: "default"
	I1018 10:32:55.177709  475717 default_sa.go:55] duration metric: took 2.55781ms for default service account to be created ...
	I1018 10:32:55.177718  475717 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 10:32:55.180639  475717 system_pods.go:86] 8 kube-system pods found
	I1018 10:32:55.180676  475717 system_pods.go:89] "coredns-66bc5c9577-hxrmf" [0afa9baa-7349-44ad-ab0d-5a8cf04751c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:32:55.180685  475717 system_pods.go:89] "etcd-embed-certs-101897" [bdfd5bce-7d86-4e96-ada2-43cd7ea36ba9] Running
	I1018 10:32:55.180719  475717 system_pods.go:89] "kindnet-qt6bn" [e8f627be-9c95-40c3-9c90-959737c71fc9] Running
	I1018 10:32:55.180733  475717 system_pods.go:89] "kube-apiserver-embed-certs-101897" [70a4bcb4-f0af-4bcf-9101-062ba75dbba9] Running
	I1018 10:32:55.180739  475717 system_pods.go:89] "kube-controller-manager-embed-certs-101897" [c6ed118d-dbcd-457c-b23d-dac329134f87] Running
	I1018 10:32:55.180743  475717 system_pods.go:89] "kube-proxy-bp45x" [1fb88f61-5197-4234-b157-2c84ed2dd0f3] Running
	I1018 10:32:55.180749  475717 system_pods.go:89] "kube-scheduler-embed-certs-101897" [59f4e8f7-bba7-4029-918c-1f827651aecb] Running
	I1018 10:32:55.180758  475717 system_pods.go:89] "storage-provisioner" [0d449f69-e21a-40a5-8c77-65c4665a58f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:32:55.180791  475717 retry.go:31] will retry after 290.015189ms: missing components: kube-dns
	I1018 10:32:55.475880  475717 system_pods.go:86] 8 kube-system pods found
	I1018 10:32:55.475972  475717 system_pods.go:89] "coredns-66bc5c9577-hxrmf" [0afa9baa-7349-44ad-ab0d-5a8cf04751c4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:32:55.475993  475717 system_pods.go:89] "etcd-embed-certs-101897" [bdfd5bce-7d86-4e96-ada2-43cd7ea36ba9] Running
	I1018 10:32:55.476013  475717 system_pods.go:89] "kindnet-qt6bn" [e8f627be-9c95-40c3-9c90-959737c71fc9] Running
	I1018 10:32:55.476035  475717 system_pods.go:89] "kube-apiserver-embed-certs-101897" [70a4bcb4-f0af-4bcf-9101-062ba75dbba9] Running
	I1018 10:32:55.476061  475717 system_pods.go:89] "kube-controller-manager-embed-certs-101897" [c6ed118d-dbcd-457c-b23d-dac329134f87] Running
	I1018 10:32:55.476078  475717 system_pods.go:89] "kube-proxy-bp45x" [1fb88f61-5197-4234-b157-2c84ed2dd0f3] Running
	I1018 10:32:55.476106  475717 system_pods.go:89] "kube-scheduler-embed-certs-101897" [59f4e8f7-bba7-4029-918c-1f827651aecb] Running
	I1018 10:32:55.476137  475717 system_pods.go:89] "storage-provisioner" [0d449f69-e21a-40a5-8c77-65c4665a58f5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:32:55.476166  475717 retry.go:31] will retry after 381.532323ms: missing components: kube-dns
	I1018 10:32:55.862882  475717 system_pods.go:86] 8 kube-system pods found
	I1018 10:32:55.862913  475717 system_pods.go:89] "coredns-66bc5c9577-hxrmf" [0afa9baa-7349-44ad-ab0d-5a8cf04751c4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:32:55.862923  475717 system_pods.go:89] "etcd-embed-certs-101897" [bdfd5bce-7d86-4e96-ada2-43cd7ea36ba9] Running
	I1018 10:32:55.862929  475717 system_pods.go:89] "kindnet-qt6bn" [e8f627be-9c95-40c3-9c90-959737c71fc9] Running
	I1018 10:32:55.862934  475717 system_pods.go:89] "kube-apiserver-embed-certs-101897" [70a4bcb4-f0af-4bcf-9101-062ba75dbba9] Running
	I1018 10:32:55.862938  475717 system_pods.go:89] "kube-controller-manager-embed-certs-101897" [c6ed118d-dbcd-457c-b23d-dac329134f87] Running
	I1018 10:32:55.862942  475717 system_pods.go:89] "kube-proxy-bp45x" [1fb88f61-5197-4234-b157-2c84ed2dd0f3] Running
	I1018 10:32:55.862946  475717 system_pods.go:89] "kube-scheduler-embed-certs-101897" [59f4e8f7-bba7-4029-918c-1f827651aecb] Running
	I1018 10:32:55.862949  475717 system_pods.go:89] "storage-provisioner" [0d449f69-e21a-40a5-8c77-65c4665a58f5] Running
	I1018 10:32:55.862957  475717 system_pods.go:126] duration metric: took 685.232981ms to wait for k8s-apps to be running ...
	I1018 10:32:55.862965  475717 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 10:32:55.863049  475717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:32:55.875973  475717 system_svc.go:56] duration metric: took 12.99867ms WaitForService to wait for kubelet
	I1018 10:32:55.876043  475717 kubeadm.go:586] duration metric: took 42.213135249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:32:55.876070  475717 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:32:55.879162  475717 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:32:55.879198  475717 node_conditions.go:123] node cpu capacity is 2
	I1018 10:32:55.879213  475717 node_conditions.go:105] duration metric: took 3.13636ms to run NodePressure ...
	I1018 10:32:55.879225  475717 start.go:241] waiting for startup goroutines ...
	I1018 10:32:55.879234  475717 start.go:246] waiting for cluster config update ...
	I1018 10:32:55.879245  475717 start.go:255] writing updated cluster config ...
	I1018 10:32:55.879522  475717 ssh_runner.go:195] Run: rm -f paused
	I1018 10:32:55.883001  475717 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:32:55.886681  475717 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hxrmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:56.892072  475717 pod_ready.go:94] pod "coredns-66bc5c9577-hxrmf" is "Ready"
	I1018 10:32:56.892105  475717 pod_ready.go:86] duration metric: took 1.005395763s for pod "coredns-66bc5c9577-hxrmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:56.895044  475717 pod_ready.go:83] waiting for pod "etcd-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:56.899125  475717 pod_ready.go:94] pod "etcd-embed-certs-101897" is "Ready"
	I1018 10:32:56.899149  475717 pod_ready.go:86] duration metric: took 4.077202ms for pod "etcd-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:56.901441  475717 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:56.906117  475717 pod_ready.go:94] pod "kube-apiserver-embed-certs-101897" is "Ready"
	I1018 10:32:56.906143  475717 pod_ready.go:86] duration metric: took 4.682648ms for pod "kube-apiserver-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:56.908874  475717 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:57.091345  475717 pod_ready.go:94] pod "kube-controller-manager-embed-certs-101897" is "Ready"
	I1018 10:32:57.091369  475717 pod_ready.go:86] duration metric: took 182.470232ms for pod "kube-controller-manager-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:57.290915  475717 pod_ready.go:83] waiting for pod "kube-proxy-bp45x" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:57.691020  475717 pod_ready.go:94] pod "kube-proxy-bp45x" is "Ready"
	I1018 10:32:57.691051  475717 pod_ready.go:86] duration metric: took 400.11253ms for pod "kube-proxy-bp45x" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:57.890767  475717 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:58.290375  475717 pod_ready.go:94] pod "kube-scheduler-embed-certs-101897" is "Ready"
	I1018 10:32:58.290405  475717 pod_ready.go:86] duration metric: took 399.604222ms for pod "kube-scheduler-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:32:58.290417  475717 pod_ready.go:40] duration metric: took 2.40738455s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:32:58.347215  475717 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:32:58.350765  475717 out.go:179] * Done! kubectl is now configured to use "embed-certs-101897" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 10:32:55 embed-certs-101897 crio[844]: time="2025-10-18T10:32:55.520373667Z" level=info msg="Created container 60cfcdb767ca0dfa18dd55c13c458f1847dc5eb6f2094b47077eb537b3f254e3: kube-system/coredns-66bc5c9577-hxrmf/coredns" id=a369afc3-fd44-4678-9bf4-7ed2592b8cee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:32:55 embed-certs-101897 crio[844]: time="2025-10-18T10:32:55.521107689Z" level=info msg="Starting container: 60cfcdb767ca0dfa18dd55c13c458f1847dc5eb6f2094b47077eb537b3f254e3" id=2b349aac-0794-4e15-8835-87c18273b932 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:32:55 embed-certs-101897 crio[844]: time="2025-10-18T10:32:55.525638713Z" level=info msg="Started container" PID=1726 containerID=60cfcdb767ca0dfa18dd55c13c458f1847dc5eb6f2094b47077eb537b3f254e3 description=kube-system/coredns-66bc5c9577-hxrmf/coredns id=2b349aac-0794-4e15-8835-87c18273b932 name=/runtime.v1.RuntimeService/StartContainer sandboxID=44ece2a4c4272511e5591f44b194bac746128205c31ca3f02d919b12df514f2f
	Oct 18 10:32:58 embed-certs-101897 crio[844]: time="2025-10-18T10:32:58.903777464Z" level=info msg="Running pod sandbox: default/busybox/POD" id=583f2dfa-3926-4cec-bdef-935e82d6b237 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:32:58 embed-certs-101897 crio[844]: time="2025-10-18T10:32:58.903860664Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:32:58 embed-certs-101897 crio[844]: time="2025-10-18T10:32:58.908999596Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c87e484a7d7c195b8d4cd677748a26080464f57cc433ca22384ed20d01e42e48 UID:64611957-693c-42db-b15e-d2ca4cdf6692 NetNS:/var/run/netns/eded263a-a632-4c4f-9666-7d7d9efaa85d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000dd06b8}] Aliases:map[]}"
	Oct 18 10:32:58 embed-certs-101897 crio[844]: time="2025-10-18T10:32:58.909167893Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 10:32:58 embed-certs-101897 crio[844]: time="2025-10-18T10:32:58.920718804Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c87e484a7d7c195b8d4cd677748a26080464f57cc433ca22384ed20d01e42e48 UID:64611957-693c-42db-b15e-d2ca4cdf6692 NetNS:/var/run/netns/eded263a-a632-4c4f-9666-7d7d9efaa85d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000dd06b8}] Aliases:map[]}"
	Oct 18 10:32:58 embed-certs-101897 crio[844]: time="2025-10-18T10:32:58.9210115Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 10:32:58 embed-certs-101897 crio[844]: time="2025-10-18T10:32:58.924148631Z" level=info msg="Ran pod sandbox c87e484a7d7c195b8d4cd677748a26080464f57cc433ca22384ed20d01e42e48 with infra container: default/busybox/POD" id=583f2dfa-3926-4cec-bdef-935e82d6b237 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:32:58 embed-certs-101897 crio[844]: time="2025-10-18T10:32:58.926452982Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2135b791-cb78-4bee-beba-34bd8ad84d97 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:32:58 embed-certs-101897 crio[844]: time="2025-10-18T10:32:58.92673388Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2135b791-cb78-4bee-beba-34bd8ad84d97 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:32:58 embed-certs-101897 crio[844]: time="2025-10-18T10:32:58.926858902Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2135b791-cb78-4bee-beba-34bd8ad84d97 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:32:58 embed-certs-101897 crio[844]: time="2025-10-18T10:32:58.929513583Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a0c67e30-bf8d-4feb-a82a-a646c0855d07 name=/runtime.v1.ImageService/PullImage
	Oct 18 10:32:58 embed-certs-101897 crio[844]: time="2025-10-18T10:32:58.930979116Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 10:33:00 embed-certs-101897 crio[844]: time="2025-10-18T10:33:00.913902295Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=a0c67e30-bf8d-4feb-a82a-a646c0855d07 name=/runtime.v1.ImageService/PullImage
	Oct 18 10:33:00 embed-certs-101897 crio[844]: time="2025-10-18T10:33:00.914828672Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b275ab7d-4093-4ea3-b3d5-f7191a97cb93 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:33:00 embed-certs-101897 crio[844]: time="2025-10-18T10:33:00.918440589Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=424aa45d-8351-491b-b37b-811cecd2167d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:33:00 embed-certs-101897 crio[844]: time="2025-10-18T10:33:00.9257124Z" level=info msg="Creating container: default/busybox/busybox" id=a19d58db-2cf5-4e86-b412-5c1f900a4921 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:33:00 embed-certs-101897 crio[844]: time="2025-10-18T10:33:00.926537147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:33:00 embed-certs-101897 crio[844]: time="2025-10-18T10:33:00.93120094Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:33:00 embed-certs-101897 crio[844]: time="2025-10-18T10:33:00.931746759Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:33:00 embed-certs-101897 crio[844]: time="2025-10-18T10:33:00.946880655Z" level=info msg="Created container a4cb662b990dddc5f74fb43a4c6121592b92bd78b6b9cf13941fd50a306d4845: default/busybox/busybox" id=a19d58db-2cf5-4e86-b412-5c1f900a4921 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:33:00 embed-certs-101897 crio[844]: time="2025-10-18T10:33:00.949722768Z" level=info msg="Starting container: a4cb662b990dddc5f74fb43a4c6121592b92bd78b6b9cf13941fd50a306d4845" id=bc7b7b53-341a-4172-bbfd-06d9c7684bdd name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:33:00 embed-certs-101897 crio[844]: time="2025-10-18T10:33:00.952513771Z" level=info msg="Started container" PID=1786 containerID=a4cb662b990dddc5f74fb43a4c6121592b92bd78b6b9cf13941fd50a306d4845 description=default/busybox/busybox id=bc7b7b53-341a-4172-bbfd-06d9c7684bdd name=/runtime.v1.RuntimeService/StartContainer sandboxID=c87e484a7d7c195b8d4cd677748a26080464f57cc433ca22384ed20d01e42e48
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	a4cb662b990dd       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   c87e484a7d7c1       busybox                                      default
	60cfcdb767ca0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   44ece2a4c4272       coredns-66bc5c9577-hxrmf                     kube-system
	a7962b633265d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   6dbe4a28943f2       storage-provisioner                          kube-system
	e6f4bace09c20       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   6e363abdc08ea       kube-proxy-bp45x                             kube-system
	fef5426250423       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   7f5457e751809       kindnet-qt6bn                                kube-system
	f38ee061f953c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   959d962c9d3b0       kube-scheduler-embed-certs-101897            kube-system
	74d33550a41c7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   da8f9a89ca0b1       etcd-embed-certs-101897                      kube-system
	069432324f85b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   0b66255af6f5a       kube-apiserver-embed-certs-101897            kube-system
	bfdc134a40277       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   9bd3271df1724       kube-controller-manager-embed-certs-101897   kube-system
	
	
	==> coredns [60cfcdb767ca0dfa18dd55c13c458f1847dc5eb6f2094b47077eb537b3f254e3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58991 - 21710 "HINFO IN 141700473480114935.794610955176226410. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.038581368s
	
	
	==> describe nodes <==
	Name:               embed-certs-101897
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-101897
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=embed-certs-101897
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_32_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:32:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-101897
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:32:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:32:55 +0000   Sat, 18 Oct 2025 10:31:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:32:55 +0000   Sat, 18 Oct 2025 10:31:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:32:55 +0000   Sat, 18 Oct 2025 10:31:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 10:32:55 +0000   Sat, 18 Oct 2025 10:32:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-101897
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                ddfa9a95-8a31-40e5-b44e-f69ada911352
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-hxrmf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-101897                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-qt6bn                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-101897             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-embed-certs-101897    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-bp45x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-101897             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Warning  CgroupV1                 72s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)  kubelet          Node embed-certs-101897 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)  kubelet          Node embed-certs-101897 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 72s)  kubelet          Node embed-certs-101897 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-101897 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-101897 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-101897 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-101897 event: Registered Node embed-certs-101897 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-101897 status is now: NodeReady
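
The Events trail above explains the timeline: the kubelet came up 61s before this capture, but NodeReady arrived only 14s before it, which is why coredns and the storage-provisioner in the container listing are just ~13 seconds old. As a minimal sketch (not part of the test suite; assumes a kubeconfig at the default path), the same Conditions table can be read with standard client-go calls:

    // nodeconditions.go - editorial sketch; file name and output format are illustrative.
    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: kubeconfig at the default location.
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Mirrors the Conditions table of "kubectl describe node".
            for _, c := range n.Status.Conditions {
                fmt.Printf("%s  %-16s %-6s %s\n", n.Name, c.Type, c.Status, c.Reason)
            }
        }
    }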
	
	
	==> dmesg <==
	[ +35.463301] overlayfs: idmapped layers are currently not supported
	[Oct18 10:11] overlayfs: idmapped layers are currently not supported
	[Oct18 10:13] overlayfs: idmapped layers are currently not supported
	[Oct18 10:14] overlayfs: idmapped layers are currently not supported
	[Oct18 10:15] overlayfs: idmapped layers are currently not supported
	[Oct18 10:16] overlayfs: idmapped layers are currently not supported
	[  +1.944912] overlayfs: idmapped layers are currently not supported
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	[ +55.677340] overlayfs: idmapped layers are currently not supported
	[  +3.870584] overlayfs: idmapped layers are currently not supported
	[Oct18 10:24] overlayfs: idmapped layers are currently not supported
	[ +31.226998] overlayfs: idmapped layers are currently not supported
	[Oct18 10:27] overlayfs: idmapped layers are currently not supported
	[ +41.576921] overlayfs: idmapped layers are currently not supported
	[  +5.117406] overlayfs: idmapped layers are currently not supported
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	[Oct18 10:31] overlayfs: idmapped layers are currently not supported
	[  +3.453230] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [74d33550a41c7ea0ac3889a097f1da775988da9534ca6475bfd0e8a5d6b91b55] <==
	{"level":"warn","ts":"2025-10-18T10:32:03.814060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:03.830869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:03.849072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:03.879910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:03.898402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:03.913842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:03.951665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:03.986738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:04.010961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:04.049995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:04.089207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:04.166228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:04.181549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:04.185475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:04.216498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:04.231696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:04.275814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:04.278652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:04.300047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:04.326890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:04.346640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:04.377848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:04.404106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:04.422332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:32:04.531665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54440","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:33:09 up  2:15,  0 user,  load average: 3.45, 3.88, 3.07
	Linux embed-certs-101897 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fef54262504237f6d6bb4b987c6d48a5434563614e89dded0c2178cd6c921c1f] <==
	I1018 10:32:14.512164       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:32:14.512442       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 10:32:14.512577       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:32:14.512588       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:32:14.512605       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:32:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:32:14.765838       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:32:14.765866       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:32:14.765875       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:32:14.766584       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 10:32:44.765955       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 10:32:44.767156       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 10:32:44.767246       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 10:32:44.767265       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1018 10:32:46.266656       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 10:32:46.266686       1 metrics.go:72] Registering metrics
	I1018 10:32:46.266755       1 controller.go:711] "Syncing nftables rules"
	I1018 10:32:54.773147       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 10:32:54.773207       1 main.go:301] handling current node
	I1018 10:33:04.766729       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 10:33:04.766769       1 main.go:301] handling current node
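
The three reflector failures at 10:32:44 are kindnet's initial list calls against the in-cluster service VIP (10.96.0.1:443) timing out, a pattern commonly seen while the dataplane (kube-proxy/iptables) is still converging; the retry succeeds and the caches sync two seconds later. A quick way to test the same path with a short timeout, as a sketch assuming in-cluster credentials (rest.InClusterConfig only works from inside a pod):

    // vipcheck.go - editorial sketch; must run inside a pod, since
    // rest.InClusterConfig reads the service-account token and the
    // KUBERNETES_SERVICE_HOST/PORT environment (here the 10.96.0.1 VIP).
    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        config, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        config.Timeout = 3 * time.Second // fail fast instead of a 30s dial timeout
        clientset := kubernetes.NewForConfigOrDie(config)
        v, err := clientset.Discovery().ServerVersion()
        if err != nil {
            fmt.Println("apiserver not reachable via the service VIP:", err)
            return
        }
        fmt.Println("apiserver reachable, version", v.GitVersion)
    }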
	
	
	==> kube-apiserver [069432324f85b633946ee35750538ba611e05434f134c0b55bbfbd9ac6d7e52b] <==
	I1018 10:32:05.819186       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 10:32:05.847211       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:32:05.860935       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 10:32:05.863777       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 10:32:05.891735       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:32:05.897931       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 10:32:05.897956       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 10:32:06.326584       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 10:32:06.334529       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 10:32:06.334555       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 10:32:07.236692       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 10:32:07.299707       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 10:32:07.418671       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 10:32:07.430414       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1018 10:32:07.431537       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 10:32:07.439086       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 10:32:07.663542       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 10:32:08.371883       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 10:32:08.388042       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 10:32:08.415478       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 10:32:12.856035       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:32:12.860931       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:32:13.116083       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 10:32:13.843325       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1018 10:33:07.707731       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:37826: use of closed network connection
	
	
	==> kube-controller-manager [bfdc134a402776e1b437e2950386b433f1f791b276442e6b42d9cbe6b4b9734f] <==
	I1018 10:32:12.701660       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 10:32:12.702963       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 10:32:12.703389       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 10:32:12.704436       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:32:12.705936       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 10:32:12.708966       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 10:32:12.709916       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 10:32:12.709997       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 10:32:12.710027       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 10:32:12.710056       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 10:32:12.710401       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 10:32:12.711354       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 10:32:12.713052       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 10:32:12.713126       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 10:32:12.717615       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 10:32:12.717746       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 10:32:12.721040       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 10:32:12.723900       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-101897" podCIDRs=["10.244.0.0/24"]
	I1018 10:32:12.726158       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:32:12.730336       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 10:32:12.731596       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 10:32:12.748048       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:32:12.748072       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 10:32:12.748088       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 10:32:57.691987       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e6f4bace09c20305817bf801d86bbb5db037bc28cced239d2ee7db13655ff28f] <==
	I1018 10:32:15.892772       1 server_linux.go:53] "Using iptables proxy"
	I1018 10:32:15.982760       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 10:32:16.083003       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 10:32:16.083133       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 10:32:16.083288       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 10:32:16.108272       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:32:16.108320       1 server_linux.go:132] "Using iptables Proxier"
	I1018 10:32:16.112253       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 10:32:16.112732       1 server.go:527] "Version info" version="v1.34.1"
	I1018 10:32:16.112826       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:32:16.116063       1 config.go:106] "Starting endpoint slice config controller"
	I1018 10:32:16.116085       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 10:32:16.116414       1 config.go:200] "Starting service config controller"
	I1018 10:32:16.116429       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 10:32:16.117508       1 config.go:309] "Starting node config controller"
	I1018 10:32:16.117630       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 10:32:16.117661       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 10:32:16.123808       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 10:32:16.125235       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 10:32:16.216240       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 10:32:16.216789       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 10:32:16.226224       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [f38ee061f953c95b478dd67b25ce06e2dc837bfa312a52a0a37bca183f99db45] <==
	E1018 10:32:05.791844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 10:32:05.791921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 10:32:05.792001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 10:32:05.792131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 10:32:05.792297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 10:32:05.792387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 10:32:05.792461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 10:32:05.792491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 10:32:05.792507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 10:32:05.815162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 10:32:05.826086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 10:32:05.826273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 10:32:05.826369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 10:32:06.621721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 10:32:06.682740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 10:32:06.704396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 10:32:06.707169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 10:32:06.709504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 10:32:06.761118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 10:32:06.829218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 10:32:06.902015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 10:32:06.911147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 10:32:06.918716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 10:32:07.066645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 10:32:09.412638       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
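
The wall of "Failed to watch ... is forbidden" errors at scheduler startup are RBAC denials logged before the apiserver has finished publishing its bootstrap roles and bindings; they clear on their own once authorization catches up (the informer sync at 10:32:09 is the last line). After the fact, the same permission can be verified with a SelfSubjectAccessReview, the API behind "kubectl auth can-i". A minimal sketch, assuming a kubeconfig at the default path:

    // canilist.go - editorial sketch, assuming a kubeconfig at the default path.
    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"

        authorizationv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)
        // Ask the apiserver whether the current identity may list nodes,
        // one of the verb/resource pairs the scheduler was denied above.
        review := &authorizationv1.SelfSubjectAccessReview{
            Spec: authorizationv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authorizationv1.ResourceAttributes{
                    Verb:     "list",
                    Resource: "nodes",
                },
            },
        }
        resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().
            Create(context.TODO(), review, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("allowed:", resp.Status.Allowed, resp.Status.Reason)
    }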
	
	
	==> kubelet <==
	Oct 18 10:32:12 embed-certs-101897 kubelet[1299]: I1018 10:32:12.766530    1299 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 10:32:12 embed-certs-101897 kubelet[1299]: I1018 10:32:12.767752    1299 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 10:32:13 embed-certs-101897 kubelet[1299]: E1018 10:32:13.997050    1299 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-101897\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-101897' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 18 10:32:14 embed-certs-101897 kubelet[1299]: I1018 10:32:14.000004    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e8f627be-9c95-40c3-9c90-959737c71fc9-cni-cfg\") pod \"kindnet-qt6bn\" (UID: \"e8f627be-9c95-40c3-9c90-959737c71fc9\") " pod="kube-system/kindnet-qt6bn"
	Oct 18 10:32:14 embed-certs-101897 kubelet[1299]: I1018 10:32:14.000035    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8f627be-9c95-40c3-9c90-959737c71fc9-lib-modules\") pod \"kindnet-qt6bn\" (UID: \"e8f627be-9c95-40c3-9c90-959737c71fc9\") " pod="kube-system/kindnet-qt6bn"
	Oct 18 10:32:14 embed-certs-101897 kubelet[1299]: I1018 10:32:14.000054    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpck6\" (UniqueName: \"kubernetes.io/projected/e8f627be-9c95-40c3-9c90-959737c71fc9-kube-api-access-wpck6\") pod \"kindnet-qt6bn\" (UID: \"e8f627be-9c95-40c3-9c90-959737c71fc9\") " pod="kube-system/kindnet-qt6bn"
	Oct 18 10:32:14 embed-certs-101897 kubelet[1299]: I1018 10:32:14.000077    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8f627be-9c95-40c3-9c90-959737c71fc9-xtables-lock\") pod \"kindnet-qt6bn\" (UID: \"e8f627be-9c95-40c3-9c90-959737c71fc9\") " pod="kube-system/kindnet-qt6bn"
	Oct 18 10:32:14 embed-certs-101897 kubelet[1299]: I1018 10:32:14.000093    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1fb88f61-5197-4234-b157-2c84ed2dd0f3-lib-modules\") pod \"kube-proxy-bp45x\" (UID: \"1fb88f61-5197-4234-b157-2c84ed2dd0f3\") " pod="kube-system/kube-proxy-bp45x"
	Oct 18 10:32:14 embed-certs-101897 kubelet[1299]: I1018 10:32:14.000108    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6btn\" (UniqueName: \"kubernetes.io/projected/1fb88f61-5197-4234-b157-2c84ed2dd0f3-kube-api-access-q6btn\") pod \"kube-proxy-bp45x\" (UID: \"1fb88f61-5197-4234-b157-2c84ed2dd0f3\") " pod="kube-system/kube-proxy-bp45x"
	Oct 18 10:32:14 embed-certs-101897 kubelet[1299]: I1018 10:32:14.000139    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1fb88f61-5197-4234-b157-2c84ed2dd0f3-kube-proxy\") pod \"kube-proxy-bp45x\" (UID: \"1fb88f61-5197-4234-b157-2c84ed2dd0f3\") " pod="kube-system/kube-proxy-bp45x"
	Oct 18 10:32:14 embed-certs-101897 kubelet[1299]: I1018 10:32:14.000156    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1fb88f61-5197-4234-b157-2c84ed2dd0f3-xtables-lock\") pod \"kube-proxy-bp45x\" (UID: \"1fb88f61-5197-4234-b157-2c84ed2dd0f3\") " pod="kube-system/kube-proxy-bp45x"
	Oct 18 10:32:14 embed-certs-101897 kubelet[1299]: I1018 10:32:14.125419    1299 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 10:32:14 embed-certs-101897 kubelet[1299]: W1018 10:32:14.282659    1299 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6/crio-7f5457e751809030372a42f560d76ad87bd558d44be13fab212e5a7e4ea7c2a7 WatchSource:0}: Error finding container 7f5457e751809030372a42f560d76ad87bd558d44be13fab212e5a7e4ea7c2a7: Status 404 returned error can't find the container with id 7f5457e751809030372a42f560d76ad87bd558d44be13fab212e5a7e4ea7c2a7
	Oct 18 10:32:14 embed-certs-101897 kubelet[1299]: I1018 10:32:14.508223    1299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qt6bn" podStartSLOduration=1.50820249 podStartE2EDuration="1.50820249s" podCreationTimestamp="2025-10-18 10:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:32:14.5069664 +0000 UTC m=+6.314581736" watchObservedRunningTime="2025-10-18 10:32:14.50820249 +0000 UTC m=+6.315817850"
	Oct 18 10:32:15 embed-certs-101897 kubelet[1299]: E1018 10:32:15.102174    1299 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 18 10:32:15 embed-certs-101897 kubelet[1299]: E1018 10:32:15.102308    1299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1fb88f61-5197-4234-b157-2c84ed2dd0f3-kube-proxy podName:1fb88f61-5197-4234-b157-2c84ed2dd0f3 nodeName:}" failed. No retries permitted until 2025-10-18 10:32:15.602281592 +0000 UTC m=+7.409896910 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/1fb88f61-5197-4234-b157-2c84ed2dd0f3-kube-proxy") pod "kube-proxy-bp45x" (UID: "1fb88f61-5197-4234-b157-2c84ed2dd0f3") : failed to sync configmap cache: timed out waiting for the condition
	Oct 18 10:32:17 embed-certs-101897 kubelet[1299]: I1018 10:32:17.466396    1299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bp45x" podStartSLOduration=4.466379101 podStartE2EDuration="4.466379101s" podCreationTimestamp="2025-10-18 10:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:32:16.500469557 +0000 UTC m=+8.308084892" watchObservedRunningTime="2025-10-18 10:32:17.466379101 +0000 UTC m=+9.273994428"
	Oct 18 10:32:55 embed-certs-101897 kubelet[1299]: I1018 10:32:55.059677    1299 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 10:32:55 embed-certs-101897 kubelet[1299]: I1018 10:32:55.236150    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0afa9baa-7349-44ad-ab0d-5a8cf04751c4-config-volume\") pod \"coredns-66bc5c9577-hxrmf\" (UID: \"0afa9baa-7349-44ad-ab0d-5a8cf04751c4\") " pod="kube-system/coredns-66bc5c9577-hxrmf"
	Oct 18 10:32:55 embed-certs-101897 kubelet[1299]: I1018 10:32:55.236215    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ccjt\" (UniqueName: \"kubernetes.io/projected/0afa9baa-7349-44ad-ab0d-5a8cf04751c4-kube-api-access-4ccjt\") pod \"coredns-66bc5c9577-hxrmf\" (UID: \"0afa9baa-7349-44ad-ab0d-5a8cf04751c4\") " pod="kube-system/coredns-66bc5c9577-hxrmf"
	Oct 18 10:32:55 embed-certs-101897 kubelet[1299]: I1018 10:32:55.236256    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0d449f69-e21a-40a5-8c77-65c4665a58f5-tmp\") pod \"storage-provisioner\" (UID: \"0d449f69-e21a-40a5-8c77-65c4665a58f5\") " pod="kube-system/storage-provisioner"
	Oct 18 10:32:55 embed-certs-101897 kubelet[1299]: I1018 10:32:55.236282    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v8xr\" (UniqueName: \"kubernetes.io/projected/0d449f69-e21a-40a5-8c77-65c4665a58f5-kube-api-access-9v8xr\") pod \"storage-provisioner\" (UID: \"0d449f69-e21a-40a5-8c77-65c4665a58f5\") " pod="kube-system/storage-provisioner"
	Oct 18 10:32:55 embed-certs-101897 kubelet[1299]: I1018 10:32:55.620332    1299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.620296393 podStartE2EDuration="41.620296393s" podCreationTimestamp="2025-10-18 10:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:32:55.605418902 +0000 UTC m=+47.413034229" watchObservedRunningTime="2025-10-18 10:32:55.620296393 +0000 UTC m=+47.427911711"
	Oct 18 10:32:56 embed-certs-101897 kubelet[1299]: I1018 10:32:56.593517    1299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hxrmf" podStartSLOduration=43.593496309 podStartE2EDuration="43.593496309s" podCreationTimestamp="2025-10-18 10:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:32:55.62177621 +0000 UTC m=+47.429391538" watchObservedRunningTime="2025-10-18 10:32:56.593496309 +0000 UTC m=+48.401111627"
	Oct 18 10:32:58 embed-certs-101897 kubelet[1299]: I1018 10:32:58.759115    1299 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs2w6\" (UniqueName: \"kubernetes.io/projected/64611957-693c-42db-b15e-d2ca4cdf6692-kube-api-access-fs2w6\") pod \"busybox\" (UID: \"64611957-693c-42db-b15e-d2ca4cdf6692\") " pod="default/busybox"
	
	
	==> storage-provisioner [a7962b633265d93513eb859503a4eefb312915a3deb4ac9f6b5eddbad5ca6d95] <==
	I1018 10:32:55.523200       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 10:32:55.591153       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 10:32:55.597440       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 10:32:55.605657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:32:55.619117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:32:55.623524       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 10:32:55.623802       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-101897_28c14c1b-7571-47a5-8f4b-0abd5b045778!
	I1018 10:32:55.625732       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3b672bc-ac74-4ae1-9e75-a8332f5a8fca", APIVersion:"v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-101897_28c14c1b-7571-47a5-8f4b-0abd5b045778 became leader
	W1018 10:32:55.629796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:32:55.636105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:32:55.725101       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-101897_28c14c1b-7571-47a5-8f4b-0abd5b045778!
	W1018 10:32:57.639176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:32:57.643599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:32:59.647110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:32:59.651853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:33:01.655068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:33:01.659753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:33:03.662856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:33:03.667132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:33:05.670981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:33:05.678798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:33:07.688248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:33:07.698348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
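
The recurring "v1 Endpoints is deprecated" warnings come from the provisioner's leader election, which still renews an Endpoints-based lock (the k8s.io-minikube-hostpath object in the lines above). Client-go's leaderelection package also supports a coordination/v1 Lease lock that avoids the warning; a sketch with placeholder names (the lease name and identity below are illustrative, not what the provisioner actually uses):

    // leaderlease.go - editorial sketch; the lease name and identity are
    // placeholders, not what the storage-provisioner actually uses.
    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)
        // A Lease-based lock; coordination/v1 is not deprecated, so the
        // Endpoints deprecation warnings above would not appear.
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "example-hostpath-lock", Namespace: "kube-system"},
            Client:     clientset.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"},
        }
        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { fmt.Println("became leader") },
                OnStoppedLeading: func() { fmt.Println("lost lease"); os.Exit(1) },
            },
        })
    }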
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-101897 -n embed-certs-101897
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-101897 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.44s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-715182 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-715182 --alsologtostderr -v=1: exit status 80 (2.339051442s)

-- stdout --
	* Pausing node default-k8s-diff-port-715182 ... 
	
	

-- /stdout --
** stderr ** 
	I1018 10:34:21.240692  486294 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:34:21.240867  486294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:34:21.240879  486294 out.go:374] Setting ErrFile to fd 2...
	I1018 10:34:21.240885  486294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:34:21.241394  486294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:34:21.241759  486294 out.go:368] Setting JSON to false
	I1018 10:34:21.241807  486294 mustload.go:65] Loading cluster: default-k8s-diff-port-715182
	I1018 10:34:21.242943  486294 config.go:182] Loaded profile config "default-k8s-diff-port-715182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:34:21.243530  486294 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:34:21.261977  486294 host.go:66] Checking if "default-k8s-diff-port-715182" exists ...
	I1018 10:34:21.262304  486294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:34:21.336598  486294 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 10:34:21.326523218 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:34:21.337325  486294 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-715182 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 10:34:21.340919  486294 out.go:179] * Pausing node default-k8s-diff-port-715182 ... 
	I1018 10:34:21.344077  486294 host.go:66] Checking if "default-k8s-diff-port-715182" exists ...
	I1018 10:34:21.344454  486294 ssh_runner.go:195] Run: systemctl --version
	I1018 10:34:21.344505  486294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:34:21.367319  486294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:34:21.472655  486294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:34:21.508325  486294 pause.go:52] kubelet running: true
	I1018 10:34:21.508433  486294 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:34:21.782488  486294 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:34:21.782572  486294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:34:21.865742  486294 cri.go:89] found id: "3927daefbc902bf8510c1cdc5663cd63c2b5f4102fb088ba05776992e0758eed"
	I1018 10:34:21.865767  486294 cri.go:89] found id: "8be7e98a5e3a38ee08d839bf119f903facc309086fe2401359c1cda7829fdc9a"
	I1018 10:34:21.865772  486294 cri.go:89] found id: "1e44f1527d99187a2cfb9fe74a914deb93372eeeee161687d7f9c60126af645c"
	I1018 10:34:21.865776  486294 cri.go:89] found id: "5f57f3b79a65261acd514378b0cfe0de5a23d594bd4cb2d6e4f39b8be06c40eb"
	I1018 10:34:21.865780  486294 cri.go:89] found id: "826c12b3cbdbbe27f0afcdc885a68c29c788841c412a5f5620cccaaa4752469b"
	I1018 10:34:21.865784  486294 cri.go:89] found id: "8ac924f2c8ba493d59bc3b60efaa16f38faf443aab37d62f891a1809134404cc"
	I1018 10:34:21.865787  486294 cri.go:89] found id: "a31ff6775bd9d5e70b70f707aacc8fb2ae23fea962bd975f954a8c39da5690e9"
	I1018 10:34:21.865791  486294 cri.go:89] found id: "dfb7c0f4f545b605ddeb04490e8932c6ffb5e4afea7c622fa1cf23b6e8f53ed7"
	I1018 10:34:21.865794  486294 cri.go:89] found id: "5e58508b5c574d2cb44d4c48ef46f9795889437a9e76ee1a3215fd6336add58e"
	I1018 10:34:21.865808  486294 cri.go:89] found id: "58bca56ad6d8da6533a12ae09eebb734cc7b7537a1fc6fb47e782b3b7b5be731"
	I1018 10:34:21.865812  486294 cri.go:89] found id: "fbcfeb81c2450f4d9bcac716f9e0712d078fee0206275c2807833c4628397eaa"
	I1018 10:34:21.865816  486294 cri.go:89] found id: ""
	I1018 10:34:21.865872  486294 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:34:21.886047  486294 retry.go:31] will retry after 172.981721ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:34:21Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:34:22.059583  486294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:34:22.074134  486294 pause.go:52] kubelet running: false
	I1018 10:34:22.074249  486294 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:34:22.244113  486294 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:34:22.244202  486294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:34:22.318236  486294 cri.go:89] found id: "3927daefbc902bf8510c1cdc5663cd63c2b5f4102fb088ba05776992e0758eed"
	I1018 10:34:22.318261  486294 cri.go:89] found id: "8be7e98a5e3a38ee08d839bf119f903facc309086fe2401359c1cda7829fdc9a"
	I1018 10:34:22.318267  486294 cri.go:89] found id: "1e44f1527d99187a2cfb9fe74a914deb93372eeeee161687d7f9c60126af645c"
	I1018 10:34:22.318271  486294 cri.go:89] found id: "5f57f3b79a65261acd514378b0cfe0de5a23d594bd4cb2d6e4f39b8be06c40eb"
	I1018 10:34:22.318291  486294 cri.go:89] found id: "826c12b3cbdbbe27f0afcdc885a68c29c788841c412a5f5620cccaaa4752469b"
	I1018 10:34:22.318295  486294 cri.go:89] found id: "8ac924f2c8ba493d59bc3b60efaa16f38faf443aab37d62f891a1809134404cc"
	I1018 10:34:22.318299  486294 cri.go:89] found id: "a31ff6775bd9d5e70b70f707aacc8fb2ae23fea962bd975f954a8c39da5690e9"
	I1018 10:34:22.318302  486294 cri.go:89] found id: "dfb7c0f4f545b605ddeb04490e8932c6ffb5e4afea7c622fa1cf23b6e8f53ed7"
	I1018 10:34:22.318305  486294 cri.go:89] found id: "5e58508b5c574d2cb44d4c48ef46f9795889437a9e76ee1a3215fd6336add58e"
	I1018 10:34:22.318312  486294 cri.go:89] found id: "58bca56ad6d8da6533a12ae09eebb734cc7b7537a1fc6fb47e782b3b7b5be731"
	I1018 10:34:22.318318  486294 cri.go:89] found id: "fbcfeb81c2450f4d9bcac716f9e0712d078fee0206275c2807833c4628397eaa"
	I1018 10:34:22.318321  486294 cri.go:89] found id: ""
	I1018 10:34:22.318371  486294 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:34:22.329695  486294 retry.go:31] will retry after 246.582018ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:34:22Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:34:22.577219  486294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:34:22.592039  486294 pause.go:52] kubelet running: false
	I1018 10:34:22.592186  486294 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:34:22.771712  486294 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:34:22.771791  486294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:34:22.844983  486294 cri.go:89] found id: "3927daefbc902bf8510c1cdc5663cd63c2b5f4102fb088ba05776992e0758eed"
	I1018 10:34:22.845009  486294 cri.go:89] found id: "8be7e98a5e3a38ee08d839bf119f903facc309086fe2401359c1cda7829fdc9a"
	I1018 10:34:22.845015  486294 cri.go:89] found id: "1e44f1527d99187a2cfb9fe74a914deb93372eeeee161687d7f9c60126af645c"
	I1018 10:34:22.845019  486294 cri.go:89] found id: "5f57f3b79a65261acd514378b0cfe0de5a23d594bd4cb2d6e4f39b8be06c40eb"
	I1018 10:34:22.845022  486294 cri.go:89] found id: "826c12b3cbdbbe27f0afcdc885a68c29c788841c412a5f5620cccaaa4752469b"
	I1018 10:34:22.845026  486294 cri.go:89] found id: "8ac924f2c8ba493d59bc3b60efaa16f38faf443aab37d62f891a1809134404cc"
	I1018 10:34:22.845029  486294 cri.go:89] found id: "a31ff6775bd9d5e70b70f707aacc8fb2ae23fea962bd975f954a8c39da5690e9"
	I1018 10:34:22.845032  486294 cri.go:89] found id: "dfb7c0f4f545b605ddeb04490e8932c6ffb5e4afea7c622fa1cf23b6e8f53ed7"
	I1018 10:34:22.845035  486294 cri.go:89] found id: "5e58508b5c574d2cb44d4c48ef46f9795889437a9e76ee1a3215fd6336add58e"
	I1018 10:34:22.845042  486294 cri.go:89] found id: "58bca56ad6d8da6533a12ae09eebb734cc7b7537a1fc6fb47e782b3b7b5be731"
	I1018 10:34:22.845045  486294 cri.go:89] found id: "fbcfeb81c2450f4d9bcac716f9e0712d078fee0206275c2807833c4628397eaa"
	I1018 10:34:22.845049  486294 cri.go:89] found id: ""
	I1018 10:34:22.845098  486294 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:34:22.859160  486294 retry.go:31] will retry after 338.760081ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:34:22Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:34:23.198439  486294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:34:23.215037  486294 pause.go:52] kubelet running: false
	I1018 10:34:23.215111  486294 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:34:23.399165  486294 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:34:23.399299  486294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:34:23.483755  486294 cri.go:89] found id: "3927daefbc902bf8510c1cdc5663cd63c2b5f4102fb088ba05776992e0758eed"
	I1018 10:34:23.483791  486294 cri.go:89] found id: "8be7e98a5e3a38ee08d839bf119f903facc309086fe2401359c1cda7829fdc9a"
	I1018 10:34:23.483797  486294 cri.go:89] found id: "1e44f1527d99187a2cfb9fe74a914deb93372eeeee161687d7f9c60126af645c"
	I1018 10:34:23.483816  486294 cri.go:89] found id: "5f57f3b79a65261acd514378b0cfe0de5a23d594bd4cb2d6e4f39b8be06c40eb"
	I1018 10:34:23.483838  486294 cri.go:89] found id: "826c12b3cbdbbe27f0afcdc885a68c29c788841c412a5f5620cccaaa4752469b"
	I1018 10:34:23.483856  486294 cri.go:89] found id: "8ac924f2c8ba493d59bc3b60efaa16f38faf443aab37d62f891a1809134404cc"
	I1018 10:34:23.483859  486294 cri.go:89] found id: "a31ff6775bd9d5e70b70f707aacc8fb2ae23fea962bd975f954a8c39da5690e9"
	I1018 10:34:23.483862  486294 cri.go:89] found id: "dfb7c0f4f545b605ddeb04490e8932c6ffb5e4afea7c622fa1cf23b6e8f53ed7"
	I1018 10:34:23.483865  486294 cri.go:89] found id: "5e58508b5c574d2cb44d4c48ef46f9795889437a9e76ee1a3215fd6336add58e"
	I1018 10:34:23.483871  486294 cri.go:89] found id: "58bca56ad6d8da6533a12ae09eebb734cc7b7537a1fc6fb47e782b3b7b5be731"
	I1018 10:34:23.483874  486294 cri.go:89] found id: "fbcfeb81c2450f4d9bcac716f9e0712d078fee0206275c2807833c4628397eaa"
	I1018 10:34:23.483877  486294 cri.go:89] found id: ""
	I1018 10:34:23.483946  486294 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:34:23.505916  486294 out.go:203] 
	W1018 10:34:23.508779  486294 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:34:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 10:34:23.508803  486294 out.go:285] * 
	W1018 10:34:23.517096  486294 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 10:34:23.520270  486294 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-715182 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-715182
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-715182:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f",
	        "Created": "2025-10-18T10:31:31.395284928Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 482021,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:33:17.333086669Z",
	            "FinishedAt": "2025-10-18T10:33:16.546909075Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/hosts",
	        "LogPath": "/var/lib/docker/containers/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f-json.log",
	        "Name": "/default-k8s-diff-port-715182",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-715182:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-715182",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f",
	                "LowerDir": "/var/lib/docker/overlay2/6ff6ee3c921ec4dcd2c6886a96b742acee0f82f430b6751112e705bca4f05201-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ff6ee3c921ec4dcd2c6886a96b742acee0f82f430b6751112e705bca4f05201/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ff6ee3c921ec4dcd2c6886a96b742acee0f82f430b6751112e705bca4f05201/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ff6ee3c921ec4dcd2c6886a96b742acee0f82f430b6751112e705bca4f05201/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-715182",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-715182/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-715182",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-715182",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-715182",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6dd78a606a67c4be0ef2b59a56eb7bde5512908426b68f7f0fc78eb23724df82",
	            "SandboxKey": "/var/run/docker/netns/6dd78a606a67",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-715182": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:29:ad:a3:a0:1b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "788491100ff23209b4a58b30f7bb3bc0737bdeee77d901da545d647f4fa241c9",
	                    "EndpointID": "3d4432009ac5b6cdcc6b8a93c1c8cf04bccf9271f326a900aa4200159a033d85",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-715182",
	                        "2afd5447007b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
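The `docker inspect` dump above is exactly what the port-resolution step in the log consumes: the repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls read `NetworkSettings.Ports["22/tcp"][0].HostPort` (33439 here) to build the SSH client. A standalone Go sketch of the same lookup, using the profile name from this run:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// The same Go template the log lines use to resolve the forwarded SSH port
    	// (HostPort 33439 in the inspect output above).
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", tmpl, "default-k8s-diff-port-715182").Output()
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
    }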
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-715182 -n default-k8s-diff-port-715182
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-715182 -n default-k8s-diff-port-715182: exit status 2 (411.604751ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-715182 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-715182 logs -n 25: (1.40603759s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-233372 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-233372          │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ delete  │ -p cert-options-233372                                                                                                                                                                                                                        │ cert-options-233372          │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ start   │ -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-309062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:29 UTC │                     │
	│ stop    │ -p old-k8s-version-309062 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:29 UTC │ 18 Oct 25 10:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-309062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:30 UTC │ 18 Oct 25 10:30 UTC │
	│ start   │ -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:30 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p cert-expiration-733799 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-733799       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ image   │ old-k8s-version-309062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ pause   │ -p old-k8s-version-309062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │                     │
	│ delete  │ -p old-k8s-version-309062                                                                                                                                                                                                                     │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ delete  │ -p old-k8s-version-309062                                                                                                                                                                                                                     │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:32 UTC │
	│ delete  │ -p cert-expiration-733799                                                                                                                                                                                                                     │ cert-expiration-733799       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:32 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-715182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-715182 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-101897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	│ stop    │ -p embed-certs-101897 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-715182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-101897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ start   │ -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	│ image   │ default-k8s-diff-port-715182 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ pause   │ -p default-k8s-diff-port-715182 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:33:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
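	Every line below follows the klog header documented just above (`[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`). A small, self-contained parser sketch for that header — the regexp is an assumption, written only to match this documented layout:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Header layout: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
		re := regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)
		line := "I1018 10:33:22.456245  482683 out.go:360] Setting OutFile to fd 1 ..."
		m := re.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s thread=%s loc=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}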
	I1018 10:33:22.456245  482683 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:33:22.456379  482683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:33:22.456390  482683 out.go:374] Setting ErrFile to fd 2...
	I1018 10:33:22.456395  482683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:33:22.456670  482683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:33:22.457027  482683 out.go:368] Setting JSON to false
	I1018 10:33:22.457986  482683 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8153,"bootTime":1760775450,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:33:22.458053  482683 start.go:141] virtualization:  
	I1018 10:33:22.462892  482683 out.go:179] * [embed-certs-101897] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:33:22.466048  482683 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:33:22.466086  482683 notify.go:220] Checking for updates...
	I1018 10:33:22.471842  482683 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:33:22.474691  482683 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:33:22.477622  482683 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:33:22.480468  482683 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:33:22.483387  482683 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:33:22.486732  482683 config.go:182] Loaded profile config "embed-certs-101897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:33:22.487347  482683 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:33:22.522063  482683 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:33:22.522178  482683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:33:22.604420  482683 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 10:33:22.595338658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:33:22.604528  482683 docker.go:318] overlay module found
	I1018 10:33:22.607717  482683 out.go:179] * Using the docker driver based on existing profile
	I1018 10:33:22.610589  482683 start.go:305] selected driver: docker
	I1018 10:33:22.610612  482683 start.go:925] validating driver "docker" against &{Name:embed-certs-101897 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:33:22.610721  482683 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:33:22.611415  482683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:33:22.697919  482683 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 10:33:22.68877142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:33:22.698293  482683 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:33:22.698327  482683 cni.go:84] Creating CNI manager for ""
	I1018 10:33:22.698384  482683 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:33:22.698433  482683 start.go:349] cluster config:
	{Name:embed-certs-101897 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:33:22.703366  482683 out.go:179] * Starting "embed-certs-101897" primary control-plane node in "embed-certs-101897" cluster
	I1018 10:33:22.710028  482683 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:33:22.713990  482683 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:33:22.717823  482683 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:33:22.717891  482683 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 10:33:22.717905  482683 cache.go:58] Caching tarball of preloaded images
	I1018 10:33:22.717926  482683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:33:22.717994  482683 preload.go:233] Found /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 10:33:22.718002  482683 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 10:33:22.718111  482683 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/config.json ...
	I1018 10:33:22.744876  482683 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:33:22.744895  482683 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 10:33:22.744909  482683 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:33:22.744930  482683 start.go:360] acquireMachinesLock for embed-certs-101897: {Name:mkdf4f50051bf510e5fec7789d20200884d252f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:33:22.744988  482683 start.go:364] duration metric: took 37.186µs to acquireMachinesLock for "embed-certs-101897"
	I1018 10:33:22.745007  482683 start.go:96] Skipping create...Using existing machine configuration
	I1018 10:33:22.745012  482683 fix.go:54] fixHost starting: 
	I1018 10:33:22.745525  482683 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:33:22.771013  482683 fix.go:112] recreateIfNeeded on embed-certs-101897: state=Stopped err=<nil>
	W1018 10:33:22.771040  482683 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 10:33:22.113078  481899 provision.go:177] copyRemoteCerts
	I1018 10:33:22.113246  481899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:33:22.113313  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:22.134795  481899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:33:22.264397  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:33:22.287573  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 10:33:22.310509  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:33:22.333966  481899 provision.go:87] duration metric: took 1.216422883s to configureAuth
	I1018 10:33:22.333987  481899 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:33:22.334184  481899 config.go:182] Loaded profile config "default-k8s-diff-port-715182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:33:22.334288  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:22.352970  481899 main.go:141] libmachine: Using SSH client type: native
	I1018 10:33:22.353365  481899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33439 <nil> <nil>}
	I1018 10:33:22.353389  481899 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:33:22.740883  481899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:33:22.740903  481899 machine.go:96] duration metric: took 5.133739169s to provisionDockerMachine
	I1018 10:33:22.740914  481899 start.go:293] postStartSetup for "default-k8s-diff-port-715182" (driver="docker")
	I1018 10:33:22.740925  481899 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:33:22.740997  481899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:33:22.741039  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:22.770861  481899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:33:22.879271  481899 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:33:22.885964  481899 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:33:22.886048  481899 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:33:22.886083  481899 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:33:22.886166  481899 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:33:22.886283  481899 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:33:22.886447  481899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:33:22.905944  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:33:22.938690  481899 start.go:296] duration metric: took 197.760489ms for postStartSetup
	I1018 10:33:22.938805  481899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:33:22.938880  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:22.966493  481899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:33:23.072542  481899 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:33:23.080512  481899 fix.go:56] duration metric: took 5.800716249s for fixHost
	I1018 10:33:23.080614  481899 start.go:83] releasing machines lock for "default-k8s-diff-port-715182", held for 5.800834535s
	I1018 10:33:23.080765  481899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-715182
	I1018 10:33:23.114756  481899 ssh_runner.go:195] Run: cat /version.json
	I1018 10:33:23.114805  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:23.115048  481899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:33:23.115110  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:23.181799  481899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:33:23.189316  481899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:33:23.309772  481899 ssh_runner.go:195] Run: systemctl --version
	I1018 10:33:23.416534  481899 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:33:23.483918  481899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:33:23.489842  481899 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:33:23.489979  481899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:33:23.502035  481899 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 10:33:23.502110  481899 start.go:495] detecting cgroup driver to use...
	I1018 10:33:23.502157  481899 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:33:23.502240  481899 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:33:23.519500  481899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:33:23.537748  481899 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:33:23.537811  481899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:33:23.559022  481899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:33:23.578654  481899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:33:23.775065  481899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:33:23.939442  481899 docker.go:234] disabling docker service ...
	I1018 10:33:23.939511  481899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:33:23.961311  481899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:33:23.978157  481899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:33:24.100733  481899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:33:24.221075  481899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:33:24.235656  481899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:33:24.251306  481899 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:33:24.251393  481899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:24.260704  481899 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:33:24.260774  481899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:24.269946  481899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:24.279192  481899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:24.288214  481899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:33:24.296532  481899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:24.305777  481899 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:24.314395  481899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:24.323060  481899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:33:24.330682  481899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:33:24.338036  481899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:33:24.450500  481899 ssh_runner.go:195] Run: sudo systemctl restart crio
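
The sed pipeline above pins the pause image, switches CRI-O to the cgroupfs manager, parks conmon in the pod cgroup, and seeds default_sysctls so pods may bind unprivileged ports, then daemon-reloads and restarts crio to pick the drop-in up. A rough Go equivalent of those rewrites, operating on the config as an in-memory string (illustrative only; minikube drives sed over /etc/crio/crio.conf.d/02-crio.conf via SSH):

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf applies the same edits the log performs with sed.
func patchCrioConf(conf string) string {
	// pin the pause image
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// drop any existing conmon_cgroup line, as the `sed -i '/.../d'` step does
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	// force cgroupfs and re-add conmon_cgroup right after it
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	// open unprivileged ports unless a default_sysctls block already exists
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return conf
}

func main() {
	fmt.Print(patchCrioConf("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"))
}
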
	I1018 10:33:24.592261  481899 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:33:24.592331  481899 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:33:24.596623  481899 start.go:563] Will wait 60s for crictl version
	I1018 10:33:24.596703  481899 ssh_runner.go:195] Run: which crictl
	I1018 10:33:24.600587  481899 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:33:24.626336  481899 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:33:24.626428  481899 ssh_runner.go:195] Run: crio --version
	I1018 10:33:24.653313  481899 ssh_runner.go:195] Run: crio --version
	I1018 10:33:24.687809  481899 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:33:24.690560  481899 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-715182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:33:24.707135  481899 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 10:33:24.711304  481899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
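
The hosts rewrite above is a small filter-and-append: grep -v strips any stale host.minikube.internal mapping, the fresh line is appended, and the temp file is copied back with `sudo cp` rather than `mv` (Docker bind-mounts /etc/hosts into the container, so the file has to be rewritten in place). The same filtering logic, sketched in Go with illustrative names:

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any line already ending in "\t<name>" and appends
// a fresh "ip\tname" mapping, mirroring the grep -v / echo pipeline above.
func upsertHostsEntry(hosts, ip, name string) string {
	var b strings.Builder
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale mapping
		}
		b.WriteString(line)
		b.WriteByte('\n')
	}
	fmt.Fprintf(&b, "%s\t%s\n", ip, name)
	return b.String()
}

func main() {
	fmt.Print(upsertHostsEntry("127.0.0.1\tlocalhost\n", "192.168.76.1", "host.minikube.internal"))
}
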
	I1018 10:33:24.721222  481899 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-715182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-715182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:33:24.721339  481899 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:33:24.721406  481899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:33:24.759210  481899 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:33:24.759230  481899 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:33:24.759286  481899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:33:24.788102  481899 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:33:24.788140  481899 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:33:24.788148  481899 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1018 10:33:24.788254  481899 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-715182 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-715182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
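
The kubelet drop-in printed above disables QoS cgroups and node-allocatable enforcement and pins the node name and IP; a few lines below it is scp'd in as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch of rendering that unit with text/template (the struct and field names are invented for the example, not minikube's types):

package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.34.1", "default-k8s-diff-port-715182", "192.168.76.2"})
}
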
	I1018 10:33:24.788336  481899 ssh_runner.go:195] Run: crio config
	I1018 10:33:24.858184  481899 cni.go:84] Creating CNI manager for ""
	I1018 10:33:24.858205  481899 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:33:24.858228  481899 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:33:24.858250  481899 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-715182 NodeName:default-k8s-diff-port-715182 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:33:24.858377  481899 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-715182"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 10:33:24.858457  481899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:33:24.866097  481899 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:33:24.866215  481899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:33:24.873732  481899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 10:33:24.886358  481899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:33:24.898994  481899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1018 10:33:24.911793  481899 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:33:24.915471  481899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:33:24.925153  481899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:33:25.036261  481899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:33:25.056632  481899 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182 for IP: 192.168.76.2
	I1018 10:33:25.056655  481899 certs.go:195] generating shared ca certs ...
	I1018 10:33:25.056672  481899 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:33:25.056868  481899 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:33:25.056942  481899 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:33:25.056957  481899 certs.go:257] generating profile certs ...
	I1018 10:33:25.057068  481899 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.key
	I1018 10:33:25.057154  481899 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.key.7b193c3d
	I1018 10:33:25.057289  481899 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.key
	I1018 10:33:25.057451  481899 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:33:25.057496  481899 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:33:25.057611  481899 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:33:25.057648  481899 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:33:25.057709  481899 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:33:25.057739  481899 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:33:25.057811  481899 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:33:25.058577  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:33:25.084425  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:33:25.109005  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:33:25.130853  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:33:25.150248  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 10:33:25.169063  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:33:25.195042  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:33:25.218873  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 10:33:25.254355  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:33:25.276479  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:33:25.296500  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:33:25.316253  481899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:33:25.334949  481899 ssh_runner.go:195] Run: openssl version
	I1018 10:33:25.341688  481899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:33:25.350932  481899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:33:25.354937  481899 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:33:25.355037  481899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:33:25.403384  481899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:33:25.411970  481899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:33:25.420939  481899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:33:25.424799  481899 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:33:25.424901  481899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:33:25.466286  481899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:33:25.474363  481899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:33:25.482886  481899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:33:25.486887  481899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:33:25.486959  481899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:33:25.530152  481899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
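
The openssl/ln steps above follow the OpenSSL trust-store convention: a CA in /etc/ssl/certs is looked up through a symlink named after its subject hash (here 51391683.0, 3ec20f2e.0 and b5213941.0), so installing a cert means copying the PEM in and linking its hash name to it. A Go sketch of those two steps, shelling out to openssl as the log does (paths illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCACert links a PEM into an OpenSSL-style cert directory under its
// subject-hash name (e.g. b5213941.0), the same result as the
// `openssl x509 -hash -noout` plus `ln -fs` pair in the log.
func installCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
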
	I1018 10:33:25.538336  481899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:33:25.542278  481899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 10:33:25.583956  481899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 10:33:25.625718  481899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 10:33:25.667976  481899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 10:33:25.711901  481899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 10:33:25.768834  481899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
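
Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours; the restart path sweeps every control-plane cert this way before deciding whether it can reuse them. The same test in pure Go, as a sketch (cert path illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether a PEM-encoded cert expires within d,
// equivalent to `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
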
	I1018 10:33:25.839983  481899 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-715182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-715182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:33:25.840148  481899 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:33:25.840245  481899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:33:25.896365  481899 cri.go:89] found id: "8ac924f2c8ba493d59bc3b60efaa16f38faf443aab37d62f891a1809134404cc"
	I1018 10:33:25.896388  481899 cri.go:89] found id: "a31ff6775bd9d5e70b70f707aacc8fb2ae23fea962bd975f954a8c39da5690e9"
	I1018 10:33:25.896402  481899 cri.go:89] found id: "dfb7c0f4f545b605ddeb04490e8932c6ffb5e4afea7c622fa1cf23b6e8f53ed7"
	I1018 10:33:25.896406  481899 cri.go:89] found id: "5e58508b5c574d2cb44d4c48ef46f9795889437a9e76ee1a3215fd6336add58e"
	I1018 10:33:25.896410  481899 cri.go:89] found id: ""
	I1018 10:33:25.896461  481899 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 10:33:25.916166  481899 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:33:25Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:33:25.916251  481899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:33:25.928022  481899 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 10:33:25.928043  481899 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 10:33:25.928095  481899 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 10:33:25.940307  481899 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 10:33:25.940747  481899 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-715182" does not appear in /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:33:25.940850  481899 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-293333/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-715182" cluster setting kubeconfig missing "default-k8s-diff-port-715182" context setting]
	I1018 10:33:25.941118  481899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
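
The repair above re-adds the missing cluster and context stanzas for "default-k8s-diff-port-715182" to the shared kubeconfig under a file lock. Roughly what that looks like with client-go's clientcmd API (a sketch under the assumption of file-based CA data; minikube's own update path differs in detail):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig adds a cluster entry and a matching context entry,
// the same fix the log applies when the profile is absent.
func repairKubeconfig(path, name, server, caPath string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	cluster := api.NewCluster()
	cluster.Server = server
	cluster.CertificateAuthority = caPath
	cfg.Clusters[name] = cluster

	ctx := api.NewContext()
	ctx.Cluster = name
	ctx.AuthInfo = name
	cfg.Contexts[name] = ctx
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	_ = repairKubeconfig(
		"/home/jenkins/minikube-integration/21764-293333/kubeconfig",
		"default-k8s-diff-port-715182",
		"https://192.168.76.2:8444",
		"/home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt",
	)
}
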
	I1018 10:33:25.942671  481899 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 10:33:25.957216  481899 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 10:33:25.957262  481899 kubeadm.go:601] duration metric: took 29.201389ms to restartPrimaryControlPlane
	I1018 10:33:25.957272  481899 kubeadm.go:402] duration metric: took 117.29951ms to StartCluster
	I1018 10:33:25.957287  481899 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:33:25.957350  481899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:33:25.957945  481899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:33:25.958147  481899 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:33:25.958445  481899 config.go:182] Loaded profile config "default-k8s-diff-port-715182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:33:25.958495  481899 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:33:25.958560  481899 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-715182"
	I1018 10:33:25.958574  481899 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-715182"
	W1018 10:33:25.958583  481899 addons.go:247] addon storage-provisioner should already be in state true
	I1018 10:33:25.958652  481899 host.go:66] Checking if "default-k8s-diff-port-715182" exists ...
	I1018 10:33:25.958603  481899 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-715182"
	I1018 10:33:25.958704  481899 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-715182"
	W1018 10:33:25.958710  481899 addons.go:247] addon dashboard should already be in state true
	I1018 10:33:25.958726  481899 host.go:66] Checking if "default-k8s-diff-port-715182" exists ...
	I1018 10:33:25.959184  481899 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:33:25.958610  481899 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-715182"
	I1018 10:33:25.959724  481899 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-715182"
	I1018 10:33:25.959981  481899 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:33:25.960479  481899 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:33:25.966725  481899 out.go:179] * Verifying Kubernetes components...
	I1018 10:33:25.970478  481899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:33:26.028767  481899 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-715182"
	W1018 10:33:26.028798  481899 addons.go:247] addon default-storageclass should already be in state true
	I1018 10:33:26.028825  481899 host.go:66] Checking if "default-k8s-diff-port-715182" exists ...
	I1018 10:33:26.029305  481899 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:33:26.031048  481899 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 10:33:26.031222  481899 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:33:26.035023  481899 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 10:33:26.035129  481899 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:33:26.035139  481899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:33:26.035205  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:26.038208  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 10:33:26.038264  481899 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 10:33:26.038340  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:26.078981  481899 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:33:26.079007  481899 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:33:26.079073  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:26.093295  481899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:33:26.113394  481899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:33:26.135162  481899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:33:26.336293  481899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:33:26.390838  481899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:33:26.397133  481899 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-715182" to be "Ready" ...
	I1018 10:33:26.414729  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 10:33:26.414765  481899 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 10:33:26.507584  481899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:33:26.557322  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 10:33:26.557345  481899 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 10:33:26.612290  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 10:33:26.612312  481899 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 10:33:26.683498  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 10:33:26.683516  481899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 10:33:26.734551  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 10:33:26.734572  481899 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 10:33:26.762925  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 10:33:26.762949  481899 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 10:33:26.780941  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 10:33:26.780963  481899 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 10:33:26.797730  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 10:33:26.797750  481899 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 10:33:26.819334  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 10:33:26.819355  481899 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 10:33:26.836598  481899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
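
All ten dashboard manifests staged above go in through a single kubectl invocation against the in-VM kubeconfig, so the addon is applied as one command rather than file by file. A sketch of assembling that call (binary and paths taken from the log, the helper name is invented):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests builds the single `kubectl apply -f ... -f ...` command
// the log runs for the dashboard addon.
func applyManifests(files ...string) *exec.Cmd {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", "/etc/kubernetes/addons/"+f)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	return cmd
}

func main() {
	fmt.Println(applyManifests("dashboard-ns.yaml", "dashboard-dp.yaml", "dashboard-svc.yaml").Args)
}
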
	I1018 10:33:22.774598  482683 out.go:252] * Restarting existing docker container for "embed-certs-101897" ...
	I1018 10:33:22.774683  482683 cli_runner.go:164] Run: docker start embed-certs-101897
	I1018 10:33:23.100064  482683 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:33:23.155129  482683 kic.go:430] container "embed-certs-101897" state is running.
	I1018 10:33:23.155518  482683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-101897
	I1018 10:33:23.201513  482683 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/config.json ...
	I1018 10:33:23.201765  482683 machine.go:93] provisionDockerMachine start ...
	I1018 10:33:23.201832  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:23.240040  482683 main.go:141] libmachine: Using SSH client type: native
	I1018 10:33:23.240381  482683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I1018 10:33:23.240396  482683 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:33:23.241146  482683 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 10:33:26.432921  482683 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-101897
	
	I1018 10:33:26.433002  482683 ubuntu.go:182] provisioning hostname "embed-certs-101897"
	I1018 10:33:26.433097  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:26.458836  482683 main.go:141] libmachine: Using SSH client type: native
	I1018 10:33:26.459144  482683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I1018 10:33:26.459155  482683 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-101897 && echo "embed-certs-101897" | sudo tee /etc/hostname
	I1018 10:33:26.682618  482683 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-101897
	
	I1018 10:33:26.682702  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:26.710846  482683 main.go:141] libmachine: Using SSH client type: native
	I1018 10:33:26.711167  482683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I1018 10:33:26.711189  482683 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-101897' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-101897/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-101897' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:33:26.901953  482683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 10:33:26.902019  482683 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:33:26.902063  482683 ubuntu.go:190] setting up certificates
	I1018 10:33:26.902102  482683 provision.go:84] configureAuth start
	I1018 10:33:26.902181  482683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-101897
	I1018 10:33:26.933392  482683 provision.go:143] copyHostCerts
	I1018 10:33:26.933456  482683 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:33:26.933473  482683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:33:26.933554  482683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:33:26.933653  482683 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:33:26.933659  482683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:33:26.933688  482683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:33:26.933774  482683 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:33:26.933779  482683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:33:26.933803  482683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:33:26.933847  482683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.embed-certs-101897 san=[127.0.0.1 192.168.85.2 embed-certs-101897 localhost minikube]
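
configureAuth regenerates the machine's server certificate with the SAN list shown above (loopback, the container IP, the profile name, localhost, minikube). A compact crypto/x509 sketch of issuing such a cert; it self-signs for brevity, where minikube signs with its machine CA (ca.pem/ca-key.pem), and errors are elided:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-101897"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// the SAN set from the log line above
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"embed-certs-101897", "localhost", "minikube"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
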
	I1018 10:33:27.472247  482683 provision.go:177] copyRemoteCerts
	I1018 10:33:27.472314  482683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:33:27.472361  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:27.490593  482683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:33:27.602287  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:33:27.631241  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 10:33:27.663225  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:33:27.696386  482683 provision.go:87] duration metric: took 794.244039ms to configureAuth
	I1018 10:33:27.696416  482683 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:33:27.696608  482683 config.go:182] Loaded profile config "embed-certs-101897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:33:27.696721  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:27.719751  482683 main.go:141] libmachine: Using SSH client type: native
	I1018 10:33:27.720075  482683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I1018 10:33:27.720095  482683 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:33:28.158756  482683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:33:28.158824  482683 machine.go:96] duration metric: took 4.957048428s to provisionDockerMachine
	I1018 10:33:28.158875  482683 start.go:293] postStartSetup for "embed-certs-101897" (driver="docker")
	I1018 10:33:28.158913  482683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:33:28.158993  482683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:33:28.159058  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:28.189382  482683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:33:28.322365  482683 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:33:28.326251  482683 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:33:28.326276  482683 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:33:28.326287  482683 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:33:28.326338  482683 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:33:28.326409  482683 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:33:28.326511  482683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:33:28.344601  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:33:28.378249  482683 start.go:296] duration metric: took 219.33154ms for postStartSetup
	I1018 10:33:28.378347  482683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:33:28.378423  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:28.412022  482683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:33:28.536313  482683 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:33:28.542363  482683 fix.go:56] duration metric: took 5.797343135s for fixHost
	I1018 10:33:28.542390  482683 start.go:83] releasing machines lock for "embed-certs-101897", held for 5.797394359s
	I1018 10:33:28.542459  482683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-101897
	I1018 10:33:28.570883  482683 ssh_runner.go:195] Run: cat /version.json
	I1018 10:33:28.570955  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:28.571197  482683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:33:28.571252  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:28.590811  482683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:33:28.613228  482683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:33:28.721821  482683 ssh_runner.go:195] Run: systemctl --version
	I1018 10:33:28.843430  482683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:33:28.919516  482683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:33:28.930892  482683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:33:28.930977  482683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:33:28.939965  482683 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 10:33:28.939997  482683 start.go:495] detecting cgroup driver to use...
	I1018 10:33:28.940030  482683 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:33:28.940120  482683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:33:28.968096  482683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:33:28.991338  482683 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:33:28.991409  482683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:33:29.007646  482683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:33:29.023331  482683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:33:29.234650  482683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:33:29.415972  482683 docker.go:234] disabling docker service ...
	I1018 10:33:29.416035  482683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:33:29.437858  482683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:33:29.453594  482683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:33:29.681605  482683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:33:29.877769  482683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:33:29.903400  482683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:33:29.931788  482683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:33:29.931896  482683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:29.941292  482683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:33:29.941439  482683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:29.951039  482683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:29.960455  482683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:29.969884  482683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:33:29.978614  482683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:29.988202  482683 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:29.997485  482683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:30.006926  482683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:33:30.017623  482683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:33:30.028432  482683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:33:30.245686  482683 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 10:33:30.411266  482683 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:33:30.411372  482683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:33:30.420317  482683 start.go:563] Will wait 60s for crictl version
	I1018 10:33:30.420394  482683 ssh_runner.go:195] Run: which crictl
	I1018 10:33:30.424481  482683 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:33:30.471601  482683 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:33:30.471710  482683 ssh_runner.go:195] Run: crio --version
	I1018 10:33:30.538034  482683 ssh_runner.go:195] Run: crio --version
	I1018 10:33:30.597481  482683 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:33:30.600327  482683 cli_runner.go:164] Run: docker network inspect embed-certs-101897 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:33:30.623124  482683 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 10:33:30.627536  482683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:33:30.645675  482683 kubeadm.go:883] updating cluster {Name:embed-certs-101897 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:33:30.645792  482683 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:33:30.645870  482683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:33:30.704747  482683 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:33:30.704823  482683 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:33:30.704913  482683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:33:30.756860  482683 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:33:30.756881  482683 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:33:30.756889  482683 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 10:33:30.756988  482683 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-101897 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
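The doubled ExecStart= in the unit above is the standard systemd override idiom: an empty ExecStart= first clears the command inherited from the packaged kubelet.service, and the next line redefines it. A sketch of writing such a drop-in by hand, mirroring the path minikube uses below:

	# sketch only; drop-in path and flags follow the log above
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet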
	I1018 10:33:30.757071  482683 ssh_runner.go:195] Run: crio config
	I1018 10:33:30.879038  482683 cni.go:84] Creating CNI manager for ""
	I1018 10:33:30.879101  482683 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:33:30.879134  482683 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:33:30.879190  482683 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-101897 NodeName:embed-certs-101897 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:33:30.879354  482683 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-101897"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
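The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new below. They can be sanity-checked offline; a sketch, assuming a kubeadm new enough (v1.26+) to ship the validate subcommand:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new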
	
	I1018 10:33:30.879445  482683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:33:30.891471  482683 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:33:30.891616  482683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:33:30.900164  482683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 10:33:30.919895  482683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:33:30.945046  482683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1018 10:33:30.958773  482683 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:33:30.962924  482683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:33:30.972512  482683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:33:31.201095  482683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:33:31.223855  482683 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897 for IP: 192.168.85.2
	I1018 10:33:31.223879  482683 certs.go:195] generating shared ca certs ...
	I1018 10:33:31.223904  482683 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:33:31.224088  482683 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:33:31.224152  482683 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:33:31.224165  482683 certs.go:257] generating profile certs ...
	I1018 10:33:31.224262  482683 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/client.key
	I1018 10:33:31.224337  482683 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.key.cf2721a4
	I1018 10:33:31.224388  482683 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.key
	I1018 10:33:31.224518  482683 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:33:31.224556  482683 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:33:31.224579  482683 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:33:31.224618  482683 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:33:31.224653  482683 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:33:31.224686  482683 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:33:31.224740  482683 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:33:31.225524  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:33:31.275008  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:33:31.309596  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:33:31.343950  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:33:31.373648  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 10:33:31.422471  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:33:31.467546  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:33:31.527108  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 10:33:31.556020  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:33:31.610845  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:33:31.668508  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:33:31.719749  482683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:33:31.746316  482683 ssh_runner.go:195] Run: openssl version
	I1018 10:33:31.752768  482683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:33:31.768074  482683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:33:31.774795  482683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:33:31.774915  482683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:33:31.834127  482683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:33:31.843135  482683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:33:31.865284  482683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:33:31.873770  482683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:33:31.873890  482683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:33:31.941593  482683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 10:33:31.951491  482683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:33:31.967272  482683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:33:31.973943  482683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:33:31.974087  482683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:33:32.034814  482683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
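The <hash>.0 symlinks created above follow OpenSSL's hashed-directory layout: openssl x509 -hash prints the subject-name hash, and verifiers resolve /etc/ssl/certs/<hash>.0 when building chains. A quick way to confirm a link by hand, using a cert name from the log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"   # should point back at minikubeCA.pem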
	I1018 10:33:32.051707  482683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:33:32.056278  482683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 10:33:32.109865  482683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 10:33:32.169380  482683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 10:33:32.260888  482683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 10:33:32.373355  482683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 10:33:32.554529  482683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
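The -checkend 86400 runs above exit non-zero when a certificate expires within the next 86400 seconds (24 h); that is how the restart path decides whether regeneration is needed. A sketch of the same check over the whole cert directory used in the log:

	for c in /var/lib/minikube/certs/*.crt; do
	  openssl x509 -noout -in "$c" -checkend 86400 \
	    && echo "ok: $c" || echo "expiring/expired: $c"
	done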
	I1018 10:33:32.799304  482683 kubeadm.go:400] StartCluster: {Name:embed-certs-101897 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:33:32.799410  482683 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:33:32.799498  482683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:33:32.899605  482683 cri.go:89] found id: "0a5b488299c29fa8c745b6bbd5d7b3db828119f52e047c424ea4b9156c222088"
	I1018 10:33:32.899679  482683 cri.go:89] found id: "ddb705e0f64d66513424efc45237983978c1000f91094a9731d126dd8cab8ac7"
	I1018 10:33:32.899697  482683 cri.go:89] found id: "ea13a5fdbf596d27a2a9bdd7254f8af427b96bdad19fa1221e096954a6b07ec4"
	I1018 10:33:32.899716  482683 cri.go:89] found id: "98749e78e236d9c4ba517df85eb017b3e2daf5eb1d15c7618a96f229e9c048e9"
	I1018 10:33:32.899749  482683 cri.go:89] found id: ""
	I1018 10:33:32.899818  482683 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 10:33:32.938066  482683 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:33:32Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:33:32.938182  482683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:33:32.955757  482683 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 10:33:32.955779  482683 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 10:33:32.955844  482683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 10:33:32.976855  482683 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 10:33:32.977475  482683 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-101897" does not appear in /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:33:32.977768  482683 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-293333/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-101897" cluster setting kubeconfig missing "embed-certs-101897" context setting]
	I1018 10:33:32.978272  482683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:33:32.979727  482683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 10:33:32.999446  482683 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 10:33:32.999483  482683 kubeadm.go:601] duration metric: took 43.697152ms to restartPrimaryControlPlane
	I1018 10:33:32.999499  482683 kubeadm.go:402] duration metric: took 200.200437ms to StartCluster
	I1018 10:33:32.999515  482683 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:33:32.999584  482683 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:33:33.000963  482683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:33:33.001267  482683 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:33:33.001787  482683 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:33:33.001871  482683 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-101897"
	I1018 10:33:33.001894  482683 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-101897"
	W1018 10:33:33.001904  482683 addons.go:247] addon storage-provisioner should already be in state true
	I1018 10:33:33.001928  482683 host.go:66] Checking if "embed-certs-101897" exists ...
	I1018 10:33:33.002454  482683 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:33:33.002780  482683 config.go:182] Loaded profile config "embed-certs-101897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:33:33.002849  482683 addons.go:69] Setting dashboard=true in profile "embed-certs-101897"
	I1018 10:33:33.002864  482683 addons.go:238] Setting addon dashboard=true in "embed-certs-101897"
	W1018 10:33:33.002871  482683 addons.go:247] addon dashboard should already be in state true
	I1018 10:33:33.002905  482683 host.go:66] Checking if "embed-certs-101897" exists ...
	I1018 10:33:33.003352  482683 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:33:33.003774  482683 addons.go:69] Setting default-storageclass=true in profile "embed-certs-101897"
	I1018 10:33:33.003799  482683 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-101897"
	I1018 10:33:33.004046  482683 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:33:33.008264  482683 out.go:179] * Verifying Kubernetes components...
	I1018 10:33:33.013436  482683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:33:33.056743  482683 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 10:33:33.060514  482683 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 10:33:33.063728  482683 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:33:33.222178  481899 node_ready.go:49] node "default-k8s-diff-port-715182" is "Ready"
	I1018 10:33:33.222204  481899 node_ready.go:38] duration metric: took 6.82496708s for node "default-k8s-diff-port-715182" to be "Ready" ...
	I1018 10:33:33.222218  481899 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:33:33.222276  481899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:33:36.751692  481899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.360778749s)
	I1018 10:33:36.751755  481899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.244152711s)
	I1018 10:33:36.752017  481899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.915393158s)
	I1018 10:33:36.752152  481899 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.529862232s)
	I1018 10:33:36.752166  481899 api_server.go:72] duration metric: took 10.793988924s to wait for apiserver process to appear ...
	I1018 10:33:36.752172  481899 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:33:36.752187  481899 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1018 10:33:36.755111  481899 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-715182 addons enable metrics-server
	
	I1018 10:33:36.768673  481899 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
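The healthz probe above is a plain HTTPS GET against the apiserver. The same check by hand (skipping certificate verification, since the CA lives in the minikube profile directory):

	curl -k https://192.168.76.2:8444/healthz   # expect HTTP 200 and body "ok"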
	I1018 10:33:36.769843  481899 api_server.go:141] control plane version: v1.34.1
	I1018 10:33:36.769869  481899 api_server.go:131] duration metric: took 17.690256ms to wait for apiserver health ...
	I1018 10:33:36.769879  481899 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:33:36.779148  481899 system_pods.go:59] 8 kube-system pods found
	I1018 10:33:36.779185  481899 system_pods.go:61] "coredns-66bc5c9577-c2sb5" [2bf09318-3195-4ef2-a555-c4c945efa126] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:33:36.779195  481899 system_pods.go:61] "etcd-default-k8s-diff-port-715182" [13b11953-c29c-4d29-ae1b-ebce1e53f950] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:33:36.779202  481899 system_pods.go:61] "kindnet-zd5md" [e9eba0a5-422b-4250-b9b3-087619a17e95] Running
	I1018 10:33:36.779209  481899 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-715182" [823d4f57-e97b-4366-b670-121e096a2102] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:33:36.779216  481899 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-715182" [ad9c1831-0e8f-410e-a084-a4f84aeda8d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:33:36.779224  481899 system_pods.go:61] "kube-proxy-5whrp" [0b69ab6c-f661-4b7a-92ce-157440319945] Running
	I1018 10:33:36.779232  481899 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-715182" [7aa74f8f-2fa6-4ef0-9ee1-c81d0366174e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:33:36.779238  481899 system_pods.go:61] "storage-provisioner" [4e374f22-b5d4-4fc3-9c49-c35310ff348e] Running
	I1018 10:33:36.779246  481899 system_pods.go:74] duration metric: took 9.361339ms to wait for pod list to return data ...
	I1018 10:33:36.779259  481899 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:33:36.786823  481899 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 10:33:36.787732  481899 default_sa.go:45] found service account: "default"
	I1018 10:33:36.787749  481899 default_sa.go:55] duration metric: took 8.484341ms for default service account to be created ...
	I1018 10:33:36.787758  481899 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 10:33:36.789750  481899 addons.go:514] duration metric: took 10.83124061s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 10:33:36.795714  481899 system_pods.go:86] 8 kube-system pods found
	I1018 10:33:36.795743  481899 system_pods.go:89] "coredns-66bc5c9577-c2sb5" [2bf09318-3195-4ef2-a555-c4c945efa126] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:33:36.795752  481899 system_pods.go:89] "etcd-default-k8s-diff-port-715182" [13b11953-c29c-4d29-ae1b-ebce1e53f950] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:33:36.795758  481899 system_pods.go:89] "kindnet-zd5md" [e9eba0a5-422b-4250-b9b3-087619a17e95] Running
	I1018 10:33:36.795765  481899 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-715182" [823d4f57-e97b-4366-b670-121e096a2102] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:33:36.795771  481899 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-715182" [ad9c1831-0e8f-410e-a084-a4f84aeda8d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:33:36.795776  481899 system_pods.go:89] "kube-proxy-5whrp" [0b69ab6c-f661-4b7a-92ce-157440319945] Running
	I1018 10:33:36.795782  481899 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-715182" [7aa74f8f-2fa6-4ef0-9ee1-c81d0366174e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:33:36.795786  481899 system_pods.go:89] "storage-provisioner" [4e374f22-b5d4-4fc3-9c49-c35310ff348e] Running
	I1018 10:33:36.795793  481899 system_pods.go:126] duration metric: took 8.029534ms to wait for k8s-apps to be running ...
	I1018 10:33:36.795801  481899 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 10:33:36.795852  481899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:33:36.824445  481899 system_svc.go:56] duration metric: took 28.634176ms WaitForService to wait for kubelet
	I1018 10:33:36.824487  481899 kubeadm.go:586] duration metric: took 10.866303334s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:33:36.824510  481899 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:33:36.827596  481899 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:33:36.827626  481899 node_conditions.go:123] node cpu capacity is 2
	I1018 10:33:36.827641  481899 node_conditions.go:105] duration metric: took 3.123627ms to run NodePressure ...
	I1018 10:33:36.827653  481899 start.go:241] waiting for startup goroutines ...
	I1018 10:33:36.827668  481899 start.go:246] waiting for cluster config update ...
	I1018 10:33:36.827687  481899 start.go:255] writing updated cluster config ...
	I1018 10:33:36.828001  481899 ssh_runner.go:195] Run: rm -f paused
	I1018 10:33:36.837642  481899 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:33:36.842777  481899 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c2sb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:33:33.063728  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 10:33:33.063817  482683 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 10:33:33.063893  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:33.067202  482683 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:33:33.067227  482683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:33:33.067290  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:33.079253  482683 addons.go:238] Setting addon default-storageclass=true in "embed-certs-101897"
	W1018 10:33:33.079281  482683 addons.go:247] addon default-storageclass should already be in state true
	I1018 10:33:33.079304  482683 host.go:66] Checking if "embed-certs-101897" exists ...
	I1018 10:33:33.081600  482683 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:33:33.105295  482683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:33:33.130816  482683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:33:33.151950  482683 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:33:33.151978  482683 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:33:33.152044  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:33.184848  482683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:33:33.497750  482683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:33:33.557782  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 10:33:33.557810  482683 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 10:33:33.571895  482683 node_ready.go:35] waiting up to 6m0s for node "embed-certs-101897" to be "Ready" ...
	I1018 10:33:33.576971  482683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:33:33.621173  482683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:33:33.641252  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 10:33:33.641276  482683 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 10:33:33.749903  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 10:33:33.749943  482683 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 10:33:33.883454  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 10:33:33.883486  482683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 10:33:34.077898  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 10:33:34.077926  482683 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 10:33:34.139783  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 10:33:34.139832  482683 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 10:33:34.184016  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 10:33:34.184060  482683 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 10:33:34.251244  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 10:33:34.251270  482683 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 10:33:34.309697  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 10:33:34.309724  482683 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 10:33:34.341803  482683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1018 10:33:38.850113  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:33:40.852629  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	I1018 10:33:40.895449  482683 node_ready.go:49] node "embed-certs-101897" is "Ready"
	I1018 10:33:40.895483  482683 node_ready.go:38] duration metric: took 7.323544869s for node "embed-certs-101897" to be "Ready" ...
	I1018 10:33:40.895498  482683 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:33:40.895558  482683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:33:41.254617  482683 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.677608468s)
	I1018 10:33:43.880727  482683 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.259429695s)
	I1018 10:33:44.098399  482683 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.756540357s)
	I1018 10:33:44.098609  482683 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.203034004s)
	I1018 10:33:44.098624  482683 api_server.go:72] duration metric: took 11.097322432s to wait for apiserver process to appear ...
	I1018 10:33:44.098631  482683 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:33:44.098661  482683 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 10:33:44.101588  482683 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-101897 addons enable metrics-server
	
	I1018 10:33:44.104665  482683 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	W1018 10:33:42.852885  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:33:45.352570  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	I1018 10:33:44.107618  482683 addons.go:514] duration metric: took 11.105811606s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1018 10:33:44.121650  482683 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 10:33:44.123518  482683 api_server.go:141] control plane version: v1.34.1
	I1018 10:33:44.123553  482683 api_server.go:131] duration metric: took 24.910168ms to wait for apiserver health ...
	I1018 10:33:44.123563  482683 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:33:44.141349  482683 system_pods.go:59] 8 kube-system pods found
	I1018 10:33:44.141382  482683 system_pods.go:61] "coredns-66bc5c9577-hxrmf" [0afa9baa-7349-44ad-ab0d-5a8cf04751c4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:33:44.141392  482683 system_pods.go:61] "etcd-embed-certs-101897" [bdfd5bce-7d86-4e96-ada2-43cd7ea36ba9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:33:44.141400  482683 system_pods.go:61] "kindnet-qt6bn" [e8f627be-9c95-40c3-9c90-959737c71fc9] Running
	I1018 10:33:44.141407  482683 system_pods.go:61] "kube-apiserver-embed-certs-101897" [70a4bcb4-f0af-4bcf-9101-062ba75dbba9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:33:44.141415  482683 system_pods.go:61] "kube-controller-manager-embed-certs-101897" [c6ed118d-dbcd-457c-b23d-dac329134f87] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:33:44.141420  482683 system_pods.go:61] "kube-proxy-bp45x" [1fb88f61-5197-4234-b157-2c84ed2dd0f3] Running
	I1018 10:33:44.141426  482683 system_pods.go:61] "kube-scheduler-embed-certs-101897" [59f4e8f7-bba7-4029-918c-1f827651aecb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:33:44.141430  482683 system_pods.go:61] "storage-provisioner" [0d449f69-e21a-40a5-8c77-65c4665a58f5] Running
	I1018 10:33:44.141436  482683 system_pods.go:74] duration metric: took 17.868295ms to wait for pod list to return data ...
	I1018 10:33:44.141444  482683 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:33:44.156813  482683 default_sa.go:45] found service account: "default"
	I1018 10:33:44.156835  482683 default_sa.go:55] duration metric: took 15.385076ms for default service account to be created ...
	I1018 10:33:44.156844  482683 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 10:33:44.244295  482683 system_pods.go:86] 8 kube-system pods found
	I1018 10:33:44.244385  482683 system_pods.go:89] "coredns-66bc5c9577-hxrmf" [0afa9baa-7349-44ad-ab0d-5a8cf04751c4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:33:44.244411  482683 system_pods.go:89] "etcd-embed-certs-101897" [bdfd5bce-7d86-4e96-ada2-43cd7ea36ba9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:33:44.244445  482683 system_pods.go:89] "kindnet-qt6bn" [e8f627be-9c95-40c3-9c90-959737c71fc9] Running
	I1018 10:33:44.244477  482683 system_pods.go:89] "kube-apiserver-embed-certs-101897" [70a4bcb4-f0af-4bcf-9101-062ba75dbba9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:33:44.244504  482683 system_pods.go:89] "kube-controller-manager-embed-certs-101897" [c6ed118d-dbcd-457c-b23d-dac329134f87] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:33:44.244524  482683 system_pods.go:89] "kube-proxy-bp45x" [1fb88f61-5197-4234-b157-2c84ed2dd0f3] Running
	I1018 10:33:44.244565  482683 system_pods.go:89] "kube-scheduler-embed-certs-101897" [59f4e8f7-bba7-4029-918c-1f827651aecb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:33:44.244591  482683 system_pods.go:89] "storage-provisioner" [0d449f69-e21a-40a5-8c77-65c4665a58f5] Running
	I1018 10:33:44.244613  482683 system_pods.go:126] duration metric: took 87.762946ms to wait for k8s-apps to be running ...
	I1018 10:33:44.244634  482683 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 10:33:44.244736  482683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:33:44.283732  482683 system_svc.go:56] duration metric: took 39.089807ms WaitForService to wait for kubelet
	I1018 10:33:44.283771  482683 kubeadm.go:586] duration metric: took 11.282466848s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:33:44.283791  482683 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:33:44.305270  482683 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:33:44.305327  482683 node_conditions.go:123] node cpu capacity is 2
	I1018 10:33:44.305340  482683 node_conditions.go:105] duration metric: took 21.542984ms to run NodePressure ...
	I1018 10:33:44.305353  482683 start.go:241] waiting for startup goroutines ...
	I1018 10:33:44.305371  482683 start.go:246] waiting for cluster config update ...
	I1018 10:33:44.305389  482683 start.go:255] writing updated cluster config ...
	I1018 10:33:44.305699  482683 ssh_runner.go:195] Run: rm -f paused
	I1018 10:33:44.310610  482683 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:33:44.348302  482683 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hxrmf" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 10:33:46.357202  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:33:47.352714  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:33:49.856682  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:33:48.366121  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:33:50.854825  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:33:52.350707  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:33:54.849101  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:33:56.856103  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:33:52.859469  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:33:55.354714  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:33:57.356020  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:33:59.349009  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:34:01.849636  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:33:59.854180  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:34:01.854472  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:34:04.348809  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:34:06.349895  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:34:04.354468  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:34:06.854197  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	I1018 10:34:07.849359  481899 pod_ready.go:94] pod "coredns-66bc5c9577-c2sb5" is "Ready"
	I1018 10:34:07.849390  481899 pod_ready.go:86] duration metric: took 31.006586317s for pod "coredns-66bc5c9577-c2sb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:07.852428  481899 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:07.858394  481899 pod_ready.go:94] pod "etcd-default-k8s-diff-port-715182" is "Ready"
	I1018 10:34:07.858422  481899 pod_ready.go:86] duration metric: took 5.964119ms for pod "etcd-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:07.860647  481899 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:07.864996  481899 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-715182" is "Ready"
	I1018 10:34:07.865024  481899 pod_ready.go:86] duration metric: took 4.350645ms for pod "kube-apiserver-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:07.867184  481899 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:08.048451  481899 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-715182" is "Ready"
	I1018 10:34:08.048526  481899 pod_ready.go:86] duration metric: took 181.314099ms for pod "kube-controller-manager-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:08.247819  481899 pod_ready.go:83] waiting for pod "kube-proxy-5whrp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:08.646721  481899 pod_ready.go:94] pod "kube-proxy-5whrp" is "Ready"
	I1018 10:34:08.646752  481899 pod_ready.go:86] duration metric: took 398.903334ms for pod "kube-proxy-5whrp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:08.846822  481899 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:09.248114  481899 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-715182" is "Ready"
	I1018 10:34:09.248142  481899 pod_ready.go:86] duration metric: took 401.293608ms for pod "kube-scheduler-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:09.248155  481899 pod_ready.go:40] duration metric: took 32.410477882s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:34:09.323758  481899 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:34:09.327026  481899 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-715182" cluster and "default" namespace by default
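minikube names the kubeconfig context after the profile, so once the "Done!" line appears the cluster can be exercised directly; a sketch:

	kubectl --context default-k8s-diff-port-715182 get pods -A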
	W1018 10:34:08.857867  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:34:11.354896  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:34:13.854911  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:34:16.354459  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:34:18.854749  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:34:20.858487  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
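The pod_ready polling above is roughly what kubectl's declarative wait does; a sketch against the coredns label the log is watching:

	kubectl -n kube-system wait pod -l k8s-app=kube-dns \
	  --for=condition=Ready --timeout=4m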
	
	
	==> CRI-O <==
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.493304018Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.501639734Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.501811724Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.501887187Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.505155892Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.505261756Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.505284271Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.508340076Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.508378812Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.508403633Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.511656945Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.511691407Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.218217338Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=232995bc-e26a-4f37-86a4-609759db2b3b name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.221476599Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c9a562c5-4f37-4220-bd35-13f441f5b9d7 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.222652825Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq/dashboard-metrics-scraper" id=3b65d2cd-067e-4762-a6ca-a788b65acfe2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.222900557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.233506444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.234317768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.249735204Z" level=info msg="Created container 58bca56ad6d8da6533a12ae09eebb734cc7b7537a1fc6fb47e782b3b7b5be731: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq/dashboard-metrics-scraper" id=3b65d2cd-067e-4762-a6ca-a788b65acfe2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.25068257Z" level=info msg="Starting container: 58bca56ad6d8da6533a12ae09eebb734cc7b7537a1fc6fb47e782b3b7b5be731" id=a2082faa-d084-4c16-bfe0-7946360898e8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.252445478Z" level=info msg="Started container" PID=1710 containerID=58bca56ad6d8da6533a12ae09eebb734cc7b7537a1fc6fb47e782b3b7b5be731 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq/dashboard-metrics-scraper id=a2082faa-d084-4c16-bfe0-7946360898e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=137778e52673471a78a8658f67e1677f5afb63223135577262855af629642268
	Oct 18 10:34:15 default-k8s-diff-port-715182 conmon[1708]: conmon 58bca56ad6d8da6533a1 <ninfo>: container 1710 exited with status 1
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.759522187Z" level=info msg="Removing container: c9b28eb14dbadc2cd4f140f14ad6e8a495a4ca27648ffb2c66c423c82a6d9e48" id=19c2ff2e-a91a-478a-9911-2cf5d23adc9a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.769865588Z" level=info msg="Error loading conmon cgroup of container c9b28eb14dbadc2cd4f140f14ad6e8a495a4ca27648ffb2c66c423c82a6d9e48: cgroup deleted" id=19c2ff2e-a91a-478a-9911-2cf5d23adc9a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.775884765Z" level=info msg="Removed container c9b28eb14dbadc2cd4f140f14ad6e8a495a4ca27648ffb2c66c423c82a6d9e48: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq/dashboard-metrics-scraper" id=19c2ff2e-a91a-478a-9911-2cf5d23adc9a name=/runtime.v1.RuntimeService/RemoveContainer
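
The WRITE/RENAME/CREATE sequence above is kindnet atomically replacing its CNI config (write to 10-kindnet.conflist.temp, then rename it over 10-kindnet.conflist), and the Created/Started/exited lines show attempt 2 of the dashboard-metrics-scraper container dying with status 1 within a second of starting. The same runtime state can be inspected from the node directly; a sketch, assuming the profile is still running:

    minikube ssh -p default-k8s-diff-port-715182 -- sudo crictl ps -a
    minikube ssh -p default-k8s-diff-port-715182 -- sudo journalctl -u crio --since '10 min ago'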
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	58bca56ad6d8d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   2                   137778e526734       dashboard-metrics-scraper-6ffb444bf9-qcmdq             kubernetes-dashboard
	3927daefbc902       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           18 seconds ago      Running             storage-provisioner         2                   cacb30a2dd01d       storage-provisioner                                    kube-system
	fbcfeb81c2450       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago      Running             kubernetes-dashboard        0                   96da69e626d84       kubernetes-dashboard-855c9754f9-jqgfc                  kubernetes-dashboard
	7976b285f29ed       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   11543b640b56d       busybox                                                default
	8be7e98a5e3a3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago      Running             kube-proxy                  1                   884a4ecdf436e       kube-proxy-5whrp                                       kube-system
	1e44f1527d991       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   cacb30a2dd01d       storage-provisioner                                    kube-system
	5f57f3b79a652       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago      Running             coredns                     1                   551d67a32a043       coredns-66bc5c9577-c2sb5                               kube-system
	826c12b3cbdbb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   40db764fe695a       kindnet-zd5md                                          kube-system
	8ac924f2c8ba4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   9b0bfaa1a9fa8       kube-apiserver-default-k8s-diff-port-715182            kube-system
	a31ff6775bd9d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   00452da05a177       kube-scheduler-default-k8s-diff-port-715182            kube-system
	dfb7c0f4f545b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   754a81ee0ab85       etcd-default-k8s-diff-port-715182                      kube-system
	5e58508b5c574       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   bc67afdd1faac       kube-controller-manager-default-k8s-diff-port-715182   kube-system
	
	
	==> coredns [5f57f3b79a65261acd514378b0cfe0de5a23d594bd4cb2d6e4f39b8be06c40eb] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57004 - 48345 "HINFO IN 6760024403817757506.6651131778930793649. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027111461s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
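
All three reflector failures dial 10.96.0.1:443, the ClusterIP of the default kubernetes Service that fronts the API server; in-cluster clients (coredns here, kindnet and storage-provisioner below) timed out until roughly 10:34:05, most plausibly because the service rules had not yet been reprogrammed after the restart. A sketch for confirming what sits behind that VIP:

    kubectl --context default-k8s-diff-port-715182 get svc kubernetes -o wide
    # EndpointSlices behind the Service should point at the apiserver's address:
    kubectl --context default-k8s-diff-port-715182 get endpointslices \
      -l kubernetes.io/service-name=kubernetes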
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-715182
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-715182
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=default-k8s-diff-port-715182
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_32_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:32:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-715182
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:34:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:34:04 +0000   Sat, 18 Oct 2025 10:31:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:34:04 +0000   Sat, 18 Oct 2025 10:31:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:34:04 +0000   Sat, 18 Oct 2025 10:31:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 10:34:04 +0000   Sat, 18 Oct 2025 10:32:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-715182
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                c53d6dae-7a14-4045-ac49-41d96155b5e4
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-c2sb5                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m14s
	  kube-system                 etcd-default-k8s-diff-port-715182                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m20s
	  kube-system                 kindnet-zd5md                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m15s
	  kube-system                 kube-apiserver-default-k8s-diff-port-715182             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-715182    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-5whrp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-scheduler-default-k8s-diff-port-715182             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qcmdq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jqgfc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m13s                  kube-proxy       
	  Normal   Starting                 48s                    kube-proxy       
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m30s (x8 over 2m30s)  kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s (x8 over 2m30s)  kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s (x8 over 2m30s)  kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m20s                  kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m20s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m20s                  kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m20s                  kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m20s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m15s                  node-controller  Node default-k8s-diff-port-715182 event: Registered Node default-k8s-diff-port-715182 in Controller
	  Normal   NodeReady                94s                    kubelet          Node default-k8s-diff-port-715182 status is now: NodeReady
	  Normal   Starting                 59s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           45s                    node-controller  Node default-k8s-diff-port-715182 event: Registered Node default-k8s-diff-port-715182 in Controller
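
The event list records two kubelet starts (2m20s and 59s ago) plus the pre-registration NodeHasSufficient* bursts at 2m30s; the 59s restart is the one that precedes the pause under test. The same view, sorted chronologically; a sketch:

    kubectl --context default-k8s-diff-port-715182 get events \
      --field-selector involvedObject.kind=Node,involvedObject.name=default-k8s-diff-port-715182 \
      --sort-by=.lastTimestamp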
	
	
	==> dmesg <==
	[Oct18 10:13] overlayfs: idmapped layers are currently not supported
	[Oct18 10:14] overlayfs: idmapped layers are currently not supported
	[Oct18 10:15] overlayfs: idmapped layers are currently not supported
	[Oct18 10:16] overlayfs: idmapped layers are currently not supported
	[  +1.944912] overlayfs: idmapped layers are currently not supported
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	[ +55.677340] overlayfs: idmapped layers are currently not supported
	[  +3.870584] overlayfs: idmapped layers are currently not supported
	[Oct18 10:24] overlayfs: idmapped layers are currently not supported
	[ +31.226998] overlayfs: idmapped layers are currently not supported
	[Oct18 10:27] overlayfs: idmapped layers are currently not supported
	[ +41.576921] overlayfs: idmapped layers are currently not supported
	[  +5.117406] overlayfs: idmapped layers are currently not supported
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	[Oct18 10:31] overlayfs: idmapped layers are currently not supported
	[  +3.453230] overlayfs: idmapped layers are currently not supported
	[Oct18 10:33] overlayfs: idmapped layers are currently not supported
	[  +6.524055] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [dfb7c0f4f545b605ddeb04490e8932c6ffb5e4afea7c622fa1cf23b6e8f53ed7] <==
	{"level":"warn","ts":"2025-10-18T10:33:29.414972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.457363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.509806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.535182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.624290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.661923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.705939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.777463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.838111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.901661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.941953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.985442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.060117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.147784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.186151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.257670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.279680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.296674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.349890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.413496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.479846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.530536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.568703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.598717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.717795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53380","server-name":"","error":"EOF"}
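
Every etcd warning here is a connection from 127.0.0.1 that closes before completing a TLS handshake, hence "error":"EOF"; the steady cadence is consistent with local readiness probes of the client port rather than failing clients. Any bare connect-and-close reproduces the same log line; a sketch, assuming nc is present in the node image:

    minikube ssh -p default-k8s-diff-port-715182 -- nc -z 127.0.0.1 2379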
	
	
	==> kernel <==
	 10:34:24 up  2:16,  0 user,  load average: 4.52, 4.32, 3.30
	Linux default-k8s-diff-port-715182 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [826c12b3cbdbbe27f0afcdc885a68c29c788841c412a5f5620cccaaa4752469b] <==
	I1018 10:33:34.123899       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:33:34.163768       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 10:33:34.163891       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:33:34.163904       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:33:34.163919       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:33:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:33:34.486475       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:33:34.486514       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:33:34.486523       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:33:34.528976       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 10:34:04.480676       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 10:34:04.486822       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 10:34:04.486767       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 10:34:04.487180       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1018 10:34:06.086991       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 10:34:06.087040       1 metrics.go:72] Registering metrics
	I1018 10:34:06.087109       1 controller.go:711] "Syncing nftables rules"
	I1018 10:34:14.480453       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:34:14.480514       1 main.go:301] handling current node
	I1018 10:34:24.478377       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:34:24.478418       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8ac924f2c8ba493d59bc3b60efaa16f38faf443aab37d62f891a1809134404cc] <==
	I1018 10:33:33.317992       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 10:33:33.318061       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 10:33:33.349095       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 10:33:33.351402       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 10:33:33.351421       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 10:33:33.351500       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 10:33:33.351542       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 10:33:33.352147       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 10:33:33.366299       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 10:33:33.366648       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 10:33:33.385300       1 cache.go:39] Caches are synced for autoregister controller
	I1018 10:33:33.390095       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 10:33:33.436829       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 10:33:33.488215       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 10:33:33.651479       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1018 10:33:33.735988       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 10:33:35.404113       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 10:33:35.717837       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 10:33:35.941753       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 10:33:36.074363       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 10:33:36.360795       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.152.119"}
	I1018 10:33:36.425406       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.195.75"}
	I1018 10:33:39.204834       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 10:33:39.301389       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 10:33:39.660145       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5e58508b5c574d2cb44d4c48ef46f9795889437a9e76ee1a3215fd6336add58e] <==
	I1018 10:33:39.138860       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 10:33:39.140934       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 10:33:39.142115       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 10:33:39.148449       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 10:33:39.150685       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 10:33:39.153966       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 10:33:39.154166       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 10:33:39.155375       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:33:39.157547       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 10:33:39.158753       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:33:39.159870       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 10:33:39.172127       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 10:33:39.174412       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 10:33:39.176729       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 10:33:39.189993       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 10:33:39.190169       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:33:39.190203       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 10:33:39.190231       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 10:33:39.190318       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 10:33:39.190418       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 10:33:39.190512       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-715182"
	I1018 10:33:39.190582       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 10:33:39.190656       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 10:33:39.190699       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 10:33:39.200950       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	
	
	==> kube-proxy [8be7e98a5e3a38ee08d839bf119f903facc309086fe2401359c1cda7829fdc9a] <==
	I1018 10:33:36.240108       1 server_linux.go:53] "Using iptables proxy"
	I1018 10:33:36.503154       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 10:33:36.611052       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 10:33:36.611160       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 10:33:36.611274       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 10:33:36.825165       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:33:36.825347       1 server_linux.go:132] "Using iptables Proxier"
	I1018 10:33:36.849147       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 10:33:36.849612       1 server.go:527] "Version info" version="v1.34.1"
	I1018 10:33:36.849812       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:33:36.867561       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 10:33:36.867657       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 10:33:36.867793       1 config.go:200] "Starting service config controller"
	I1018 10:33:36.867832       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 10:33:36.867879       1 config.go:106] "Starting endpoint slice config controller"
	I1018 10:33:36.867921       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 10:33:36.873543       1 config.go:309] "Starting node config controller"
	I1018 10:33:36.875536       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 10:33:36.875634       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 10:33:36.969147       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 10:33:36.969273       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 10:33:36.969310       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a31ff6775bd9d5e70b70f707aacc8fb2ae23fea962bd975f954a8c39da5690e9] <==
	I1018 10:33:31.193123       1 serving.go:386] Generated self-signed cert in-memory
	I1018 10:33:35.906254       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 10:33:35.906375       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:33:35.954673       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 10:33:35.954786       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 10:33:35.954810       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 10:33:35.954837       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 10:33:35.957030       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:33:35.957044       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:33:35.957063       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 10:33:35.957069       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 10:33:36.073577       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 10:33:36.073693       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 10:33:36.073779       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 10:33:34 default-k8s-diff-port-715182 kubelet[775]: W1018 10:33:34.305416     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/crio-11543b640b56df3d566561360d7736f3ec7e77a43b4c7caa13de6d5125d69f44 WatchSource:0}: Error finding container 11543b640b56df3d566561360d7736f3ec7e77a43b4c7caa13de6d5125d69f44: Status 404 returned error can't find the container with id 11543b640b56df3d566561360d7736f3ec7e77a43b4c7caa13de6d5125d69f44
	Oct 18 10:33:39 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:39.747446     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c554b1db-a745-4da6-9d1f-3d4e2759b03e-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-jqgfc\" (UID: \"c554b1db-a745-4da6-9d1f-3d4e2759b03e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jqgfc"
	Oct 18 10:33:39 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:39.748069     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4thv\" (UniqueName: \"kubernetes.io/projected/c554b1db-a745-4da6-9d1f-3d4e2759b03e-kube-api-access-c4thv\") pod \"kubernetes-dashboard-855c9754f9-jqgfc\" (UID: \"c554b1db-a745-4da6-9d1f-3d4e2759b03e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jqgfc"
	Oct 18 10:33:39 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:39.748242     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/432ee2d6-624c-468c-bde9-bf97729e1988-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-qcmdq\" (UID: \"432ee2d6-624c-468c-bde9-bf97729e1988\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq"
	Oct 18 10:33:39 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:39.748382     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h2d8\" (UniqueName: \"kubernetes.io/projected/432ee2d6-624c-468c-bde9-bf97729e1988-kube-api-access-8h2d8\") pod \"dashboard-metrics-scraper-6ffb444bf9-qcmdq\" (UID: \"432ee2d6-624c-468c-bde9-bf97729e1988\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq"
	Oct 18 10:33:40 default-k8s-diff-port-715182 kubelet[775]: W1018 10:33:40.085513     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/crio-137778e52673471a78a8658f67e1677f5afb63223135577262855af629642268 WatchSource:0}: Error finding container 137778e52673471a78a8658f67e1677f5afb63223135577262855af629642268: Status 404 returned error can't find the container with id 137778e52673471a78a8658f67e1677f5afb63223135577262855af629642268
	Oct 18 10:33:48 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:48.704597     775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jqgfc" podStartSLOduration=1.755749665 podStartE2EDuration="9.704580867s" podCreationTimestamp="2025-10-18 10:33:39 +0000 UTC" firstStartedPulling="2025-10-18 10:33:40.093663459 +0000 UTC m=+15.040080545" lastFinishedPulling="2025-10-18 10:33:48.042494661 +0000 UTC m=+22.988911747" observedRunningTime="2025-10-18 10:33:48.704105292 +0000 UTC m=+23.650522386" watchObservedRunningTime="2025-10-18 10:33:48.704580867 +0000 UTC m=+23.650997961"
	Oct 18 10:33:54 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:54.699580     775 scope.go:117] "RemoveContainer" containerID="f2b87e2fd82851ba776e7c202eda2438401fbb245d0d1cf1badc69d1c52efb18"
	Oct 18 10:33:55 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:55.705937     775 scope.go:117] "RemoveContainer" containerID="c9b28eb14dbadc2cd4f140f14ad6e8a495a4ca27648ffb2c66c423c82a6d9e48"
	Oct 18 10:33:55 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:55.713027     775 scope.go:117] "RemoveContainer" containerID="f2b87e2fd82851ba776e7c202eda2438401fbb245d0d1cf1badc69d1c52efb18"
	Oct 18 10:33:55 default-k8s-diff-port-715182 kubelet[775]: E1018 10:33:55.727790     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qcmdq_kubernetes-dashboard(432ee2d6-624c-468c-bde9-bf97729e1988)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq" podUID="432ee2d6-624c-468c-bde9-bf97729e1988"
	Oct 18 10:33:56 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:56.707324     775 scope.go:117] "RemoveContainer" containerID="c9b28eb14dbadc2cd4f140f14ad6e8a495a4ca27648ffb2c66c423c82a6d9e48"
	Oct 18 10:33:56 default-k8s-diff-port-715182 kubelet[775]: E1018 10:33:56.707907     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qcmdq_kubernetes-dashboard(432ee2d6-624c-468c-bde9-bf97729e1988)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq" podUID="432ee2d6-624c-468c-bde9-bf97729e1988"
	Oct 18 10:34:00 default-k8s-diff-port-715182 kubelet[775]: I1018 10:34:00.027188     775 scope.go:117] "RemoveContainer" containerID="c9b28eb14dbadc2cd4f140f14ad6e8a495a4ca27648ffb2c66c423c82a6d9e48"
	Oct 18 10:34:00 default-k8s-diff-port-715182 kubelet[775]: E1018 10:34:00.028009     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qcmdq_kubernetes-dashboard(432ee2d6-624c-468c-bde9-bf97729e1988)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq" podUID="432ee2d6-624c-468c-bde9-bf97729e1988"
	Oct 18 10:34:05 default-k8s-diff-port-715182 kubelet[775]: I1018 10:34:05.730033     775 scope.go:117] "RemoveContainer" containerID="1e44f1527d99187a2cfb9fe74a914deb93372eeeee161687d7f9c60126af645c"
	Oct 18 10:34:15 default-k8s-diff-port-715182 kubelet[775]: I1018 10:34:15.217477     775 scope.go:117] "RemoveContainer" containerID="c9b28eb14dbadc2cd4f140f14ad6e8a495a4ca27648ffb2c66c423c82a6d9e48"
	Oct 18 10:34:15 default-k8s-diff-port-715182 kubelet[775]: I1018 10:34:15.757480     775 scope.go:117] "RemoveContainer" containerID="c9b28eb14dbadc2cd4f140f14ad6e8a495a4ca27648ffb2c66c423c82a6d9e48"
	Oct 18 10:34:15 default-k8s-diff-port-715182 kubelet[775]: I1018 10:34:15.757828     775 scope.go:117] "RemoveContainer" containerID="58bca56ad6d8da6533a12ae09eebb734cc7b7537a1fc6fb47e782b3b7b5be731"
	Oct 18 10:34:15 default-k8s-diff-port-715182 kubelet[775]: E1018 10:34:15.758068     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qcmdq_kubernetes-dashboard(432ee2d6-624c-468c-bde9-bf97729e1988)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq" podUID="432ee2d6-624c-468c-bde9-bf97729e1988"
	Oct 18 10:34:20 default-k8s-diff-port-715182 kubelet[775]: I1018 10:34:20.014706     775 scope.go:117] "RemoveContainer" containerID="58bca56ad6d8da6533a12ae09eebb734cc7b7537a1fc6fb47e782b3b7b5be731"
	Oct 18 10:34:20 default-k8s-diff-port-715182 kubelet[775]: E1018 10:34:20.014914     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qcmdq_kubernetes-dashboard(432ee2d6-624c-468c-bde9-bf97729e1988)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq" podUID="432ee2d6-624c-468c-bde9-bf97729e1988"
	Oct 18 10:34:21 default-k8s-diff-port-715182 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 10:34:21 default-k8s-diff-port-715182 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 10:34:21 default-k8s-diff-port-715182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
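
The kubelet log ties the scraper crash loop together: the CrashLoopBackOff delay doubles from 10s to 20s across attempts, and the closing systemd lines show kubelet.service being stopped, consistent with the pause step this test exercises. The crash history is easiest to read from the pod itself; a sketch:

    kubectl --context default-k8s-diff-port-715182 -n kubernetes-dashboard \
      describe pod dashboard-metrics-scraper-6ffb444bf9-qcmdq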
	
	
	==> kubernetes-dashboard [fbcfeb81c2450f4d9bcac716f9e0712d078fee0206275c2807833c4628397eaa] <==
	2025/10/18 10:33:48 Starting overwatch
	2025/10/18 10:33:48 Using namespace: kubernetes-dashboard
	2025/10/18 10:33:48 Using in-cluster config to connect to apiserver
	2025/10/18 10:33:48 Using secret token for csrf signing
	2025/10/18 10:33:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 10:33:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 10:33:48 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 10:33:48 Generating JWE encryption key
	2025/10/18 10:33:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 10:33:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 10:33:48 Initializing JWE encryption key from synchronized object
	2025/10/18 10:33:48 Creating in-cluster Sidecar client
	2025/10/18 10:33:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 10:33:48 Serving insecurely on HTTP port: 9090
	2025/10/18 10:34:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
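
The dashboard itself comes up cleanly; only its Sidecar (metrics) client fails, since the dashboard-metrics-scraper Service has no ready endpoints while the scraper pod crash-loops. A sketch to confirm:

    kubectl --context default-k8s-diff-port-715182 -n kubernetes-dashboard \
      get endpoints dashboard-metrics-scraper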
	
	
	==> storage-provisioner [1e44f1527d99187a2cfb9fe74a914deb93372eeeee161687d7f9c60126af645c] <==
	I1018 10:33:35.575074       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 10:34:05.584404       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
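
The first storage-provisioner instance dies on the same 10.96.0.1:443 dial timeout seen in the coredns and kindnet logs, about 30s after starting; kubelet then restarts it, matching attempt 2 in the container-status table above. A sketch for reading the restart count:

    kubectl --context default-k8s-diff-port-715182 -n kube-system get pod storage-provisioner \
      -o jsonpath='{.status.containerStatuses[0].restartCount}'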
	
	
	==> storage-provisioner [3927daefbc902bf8510c1cdc5663cd63c2b5f4102fb088ba05776992e0758eed] <==
	I1018 10:34:05.787765       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 10:34:05.800053       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 10:34:05.800102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 10:34:05.803053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:09.260100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:13.521052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:17.119368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:20.173590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:23.196732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:23.204037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:34:23.204226       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 10:34:23.204321       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d2469c43-20ab-4e8a-ab93-03156d0280d3", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-715182_ac8c4e63-d390-472c-8b81-451c630f23eb became leader
	I1018 10:34:23.204545       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-715182_ac8c4e63-d390-472c-8b81-451c630f23eb!
	W1018 10:34:23.220415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:23.228837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:34:23.305699       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-715182_ac8c4e63-d390-472c-8b81-451c630f23eb!
	W1018 10:34:25.232716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:25.240866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
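
The replacement instance reaches the API server, contends for the kube-system/k8s.io-minikube-hostpath lock, and acquires it at 10:34:23. The repeated deprecation warnings come from the leader-election client still using a v1 Endpoints object as its lock, as the log itself notes. A sketch for inspecting that object:

    kubectl --context default-k8s-diff-port-715182 -n kube-system \
      get endpoints k8s.io-minikube-hostpath -o yaml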
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-715182 -n default-k8s-diff-port-715182
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-715182 -n default-k8s-diff-port-715182: exit status 2 (400.395126ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-715182 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-715182
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-715182:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f",
	        "Created": "2025-10-18T10:31:31.395284928Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 482021,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:33:17.333086669Z",
	            "FinishedAt": "2025-10-18T10:33:16.546909075Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/hosts",
	        "LogPath": "/var/lib/docker/containers/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f-json.log",
	        "Name": "/default-k8s-diff-port-715182",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-715182:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-715182",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f",
	                "LowerDir": "/var/lib/docker/overlay2/6ff6ee3c921ec4dcd2c6886a96b742acee0f82f430b6751112e705bca4f05201-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ff6ee3c921ec4dcd2c6886a96b742acee0f82f430b6751112e705bca4f05201/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ff6ee3c921ec4dcd2c6886a96b742acee0f82f430b6751112e705bca4f05201/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ff6ee3c921ec4dcd2c6886a96b742acee0f82f430b6751112e705bca4f05201/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-715182",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-715182/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-715182",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-715182",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-715182",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6dd78a606a67c4be0ef2b59a56eb7bde5512908426b68f7f0fc78eb23724df82",
	            "SandboxKey": "/var/run/docker/netns/6dd78a606a67",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-715182": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:29:ad:a3:a0:1b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "788491100ff23209b4a58b30f7bb3bc0737bdeee77d901da545d647f4fa241c9",
	                    "EndpointID": "3d4432009ac5b6cdcc6b8a93c1c8cf04bccf9271f326a900aa4200159a033d85",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-715182",
	                        "2afd5447007b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
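The inspect dump above is the full container record; the post-mortem only needs the run state and the ephemeral host ports published on 127.0.0.1. A lighter-weight query is `docker inspect -f` with a Go template, the same template shape minikube's cli_runner uses later in these logs (a sketch against this run's container name):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-715182
	# 22/tcp was requested with HostPort "" (ephemeral); the assigned port appears under NetworkSettings.Ports (33439 in this run)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-715182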
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-715182 -n default-k8s-diff-port-715182
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-715182 -n default-k8s-diff-port-715182: exit status 2 (450.393662ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
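`minikube status` reports component state through its exit code as well as stdout, so a non-zero exit with `Running` on stdout is expected for a cluster that has just been paused; that is why the harness notes "(may be ok)" above. A minimal sketch of capturing both signals, assuming the same binary and profile:

	host_state="$(out/minikube-linux-arm64 status --format='{{.Host}}' -p default-k8s-diff-port-715182)"
	rc=$?   # non-zero encodes a component not in its default state; 2 in this run
	echo "host=${host_state} status_exit=${rc}"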
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-715182 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-715182 logs -n 25: (1.341072663s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-233372 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-233372          │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ delete  │ -p cert-options-233372                                                                                                                                                                                                                        │ cert-options-233372          │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:28 UTC │
	│ start   │ -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:28 UTC │ 18 Oct 25 10:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-309062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:29 UTC │                     │
	│ stop    │ -p old-k8s-version-309062 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:29 UTC │ 18 Oct 25 10:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-309062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:30 UTC │ 18 Oct 25 10:30 UTC │
	│ start   │ -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:30 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p cert-expiration-733799 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-733799       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ image   │ old-k8s-version-309062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ pause   │ -p old-k8s-version-309062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │                     │
	│ delete  │ -p old-k8s-version-309062                                                                                                                                                                                                                     │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ delete  │ -p old-k8s-version-309062                                                                                                                                                                                                                     │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:32 UTC │
	│ delete  │ -p cert-expiration-733799                                                                                                                                                                                                                     │ cert-expiration-733799       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:32 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-715182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-715182 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-101897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	│ stop    │ -p embed-certs-101897 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-715182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-101897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ start   │ -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:34 UTC │
	│ image   │ default-k8s-diff-port-715182 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ pause   │ -p default-k8s-diff-port-715182 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:33:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 10:33:22.456245  482683 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:33:22.456379  482683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:33:22.456390  482683 out.go:374] Setting ErrFile to fd 2...
	I1018 10:33:22.456395  482683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:33:22.456670  482683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:33:22.457027  482683 out.go:368] Setting JSON to false
	I1018 10:33:22.457986  482683 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8153,"bootTime":1760775450,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:33:22.458053  482683 start.go:141] virtualization:  
	I1018 10:33:22.462892  482683 out.go:179] * [embed-certs-101897] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:33:22.466048  482683 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:33:22.466086  482683 notify.go:220] Checking for updates...
	I1018 10:33:22.471842  482683 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:33:22.474691  482683 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:33:22.477622  482683 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:33:22.480468  482683 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:33:22.483387  482683 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:33:22.486732  482683 config.go:182] Loaded profile config "embed-certs-101897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:33:22.487347  482683 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:33:22.522063  482683 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:33:22.522178  482683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:33:22.604420  482683 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 10:33:22.595338658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:33:22.604528  482683 docker.go:318] overlay module found
	I1018 10:33:22.607717  482683 out.go:179] * Using the docker driver based on existing profile
	I1018 10:33:22.610589  482683 start.go:305] selected driver: docker
	I1018 10:33:22.610612  482683 start.go:925] validating driver "docker" against &{Name:embed-certs-101897 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:33:22.610721  482683 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:33:22.611415  482683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:33:22.697919  482683 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 10:33:22.68877142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:33:22.698293  482683 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:33:22.698327  482683 cni.go:84] Creating CNI manager for ""
	I1018 10:33:22.698384  482683 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:33:22.698433  482683 start.go:349] cluster config:
	{Name:embed-certs-101897 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:33:22.703366  482683 out.go:179] * Starting "embed-certs-101897" primary control-plane node in "embed-certs-101897" cluster
	I1018 10:33:22.710028  482683 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:33:22.713990  482683 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:33:22.717823  482683 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:33:22.717891  482683 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 10:33:22.717905  482683 cache.go:58] Caching tarball of preloaded images
	I1018 10:33:22.717926  482683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:33:22.717994  482683 preload.go:233] Found /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 10:33:22.718002  482683 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 10:33:22.718111  482683 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/config.json ...
	I1018 10:33:22.744876  482683 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:33:22.744895  482683 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 10:33:22.744909  482683 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:33:22.744930  482683 start.go:360] acquireMachinesLock for embed-certs-101897: {Name:mkdf4f50051bf510e5fec7789d20200884d252f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:33:22.744988  482683 start.go:364] duration metric: took 37.186µs to acquireMachinesLock for "embed-certs-101897"
	I1018 10:33:22.745007  482683 start.go:96] Skipping create...Using existing machine configuration
	I1018 10:33:22.745012  482683 fix.go:54] fixHost starting: 
	I1018 10:33:22.745525  482683 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:33:22.771013  482683 fix.go:112] recreateIfNeeded on embed-certs-101897: state=Stopped err=<nil>
	W1018 10:33:22.771040  482683 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 10:33:22.113078  481899 provision.go:177] copyRemoteCerts
	I1018 10:33:22.113246  481899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:33:22.113313  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:22.134795  481899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:33:22.264397  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:33:22.287573  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 10:33:22.310509  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:33:22.333966  481899 provision.go:87] duration metric: took 1.216422883s to configureAuth
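	# The scp calls above installed the docker-machine CA and server keypair under /etc/docker.
	# A quick way to verify what landed (a sketch; standard openssl flags, paths from this run):
	sudo openssl x509 -in /etc/docker/server.pem -noout -subject -enddate
	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem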
	I1018 10:33:22.333987  481899 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:33:22.334184  481899 config.go:182] Loaded profile config "default-k8s-diff-port-715182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:33:22.334288  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:22.352970  481899 main.go:141] libmachine: Using SSH client type: native
	I1018 10:33:22.353365  481899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33439 <nil> <nil>}
	I1018 10:33:22.353389  481899 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:33:22.740883  481899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:33:22.740903  481899 machine.go:96] duration metric: took 5.133739169s to provisionDockerMachine
	I1018 10:33:22.740914  481899 start.go:293] postStartSetup for "default-k8s-diff-port-715182" (driver="docker")
	I1018 10:33:22.740925  481899 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:33:22.740997  481899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:33:22.741039  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:22.770861  481899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:33:22.879271  481899 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:33:22.885964  481899 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:33:22.886048  481899 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:33:22.886083  481899 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:33:22.886166  481899 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:33:22.886283  481899 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:33:22.886447  481899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:33:22.905944  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:33:22.938690  481899 start.go:296] duration metric: took 197.760489ms for postStartSetup
	I1018 10:33:22.938805  481899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:33:22.938880  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:22.966493  481899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:33:23.072542  481899 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:33:23.080512  481899 fix.go:56] duration metric: took 5.800716249s for fixHost
	I1018 10:33:23.080614  481899 start.go:83] releasing machines lock for "default-k8s-diff-port-715182", held for 5.800834535s
	I1018 10:33:23.080765  481899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-715182
	I1018 10:33:23.114756  481899 ssh_runner.go:195] Run: cat /version.json
	I1018 10:33:23.114805  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:23.115048  481899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:33:23.115110  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:23.181799  481899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:33:23.189316  481899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:33:23.309772  481899 ssh_runner.go:195] Run: systemctl --version
	I1018 10:33:23.416534  481899 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:33:23.483918  481899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:33:23.489842  481899 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:33:23.489979  481899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:33:23.502035  481899 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 10:33:23.502110  481899 start.go:495] detecting cgroup driver to use...
	I1018 10:33:23.502157  481899 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:33:23.502240  481899 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:33:23.519500  481899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:33:23.537748  481899 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:33:23.537811  481899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:33:23.559022  481899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:33:23.578654  481899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:33:23.775065  481899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:33:23.939442  481899 docker.go:234] disabling docker service ...
	I1018 10:33:23.939511  481899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:33:23.961311  481899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:33:23.978157  481899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:33:24.100733  481899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:33:24.221075  481899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
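	# Condensed, the sequence above hands the node's container runtime over to cri-o by
	# stopping, disabling and masking docker and cri-dockerd (a manual sketch of the same steps):
	sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service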
	I1018 10:33:24.235656  481899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:33:24.251306  481899 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:33:24.251393  481899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:24.260704  481899 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:33:24.260774  481899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:24.269946  481899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:24.279192  481899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:24.288214  481899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:33:24.296532  481899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:24.305777  481899 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:24.314395  481899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:24.323060  481899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:33:24.330682  481899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:33:24.338036  481899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:33:24.450500  481899 ssh_runner.go:195] Run: sudo systemctl restart crio
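	# Taken together, the cri-o reconfiguration above boils down to in-place edits of
	# /etc/crio/crio.conf.d/02-crio.conf followed by a restart; the three core edits,
	# with the same values as this run:
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio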
	I1018 10:33:24.592261  481899 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:33:24.592331  481899 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:33:24.596623  481899 start.go:563] Will wait 60s for crictl version
	I1018 10:33:24.596703  481899 ssh_runner.go:195] Run: which crictl
	I1018 10:33:24.600587  481899 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:33:24.626336  481899 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:33:24.626428  481899 ssh_runner.go:195] Run: crio --version
	I1018 10:33:24.653313  481899 ssh_runner.go:195] Run: crio --version
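	# With /etc/crictl.yaml pointing at the cri-o socket (written above), crictl talks to
	# the runtime directly; the endpoint can also be given explicitly (a sketch):
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images --output json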
	I1018 10:33:24.687809  481899 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:33:24.690560  481899 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-715182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:33:24.707135  481899 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 10:33:24.711304  481899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
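	# Unrolled, the one-liner above pins host.minikube.internal to the docker network
	# gateway (192.168.76.1 in this run) without duplicating an existing entry:
	grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
	printf '192.168.76.1\thost.minikube.internal\n' >> /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts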
	I1018 10:33:24.721222  481899 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-715182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-715182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:33:24.721339  481899 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:33:24.721406  481899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:33:24.759210  481899 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:33:24.759230  481899 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:33:24.759286  481899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:33:24.788102  481899 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:33:24.788140  481899 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:33:24.788148  481899 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1018 10:33:24.788254  481899 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-715182 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-715182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 10:33:24.788336  481899 ssh_runner.go:195] Run: crio config
	I1018 10:33:24.858184  481899 cni.go:84] Creating CNI manager for ""
	I1018 10:33:24.858205  481899 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:33:24.858228  481899 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:33:24.858250  481899 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-715182 NodeName:default-k8s-diff-port-715182 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:33:24.858377  481899 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-715182"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
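	# The config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new further down;
	# assuming a kubeadm new enough to have the subcommand (v1.26+), the file can be
	# sanity-checked before use (a sketch, using this run's binary path):
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new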
	
	I1018 10:33:24.858457  481899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:33:24.866097  481899 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:33:24.866215  481899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:33:24.873732  481899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 10:33:24.886358  481899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:33:24.898994  481899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1018 10:33:24.911793  481899 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:33:24.915471  481899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:33:24.925153  481899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:33:25.036261  481899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:33:25.056632  481899 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182 for IP: 192.168.76.2
	I1018 10:33:25.056655  481899 certs.go:195] generating shared ca certs ...
	I1018 10:33:25.056672  481899 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:33:25.056868  481899 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:33:25.056942  481899 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:33:25.056957  481899 certs.go:257] generating profile certs ...
	I1018 10:33:25.057068  481899 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.key
	I1018 10:33:25.057154  481899 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.key.7b193c3d
	I1018 10:33:25.057289  481899 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.key
	I1018 10:33:25.057451  481899 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:33:25.057496  481899 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:33:25.057611  481899 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:33:25.057648  481899 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:33:25.057709  481899 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:33:25.057739  481899 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:33:25.057811  481899 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:33:25.058577  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:33:25.084425  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:33:25.109005  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:33:25.130853  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:33:25.150248  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 10:33:25.169063  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:33:25.195042  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:33:25.218873  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 10:33:25.254355  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:33:25.276479  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:33:25.296500  481899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:33:25.316253  481899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:33:25.334949  481899 ssh_runner.go:195] Run: openssl version
	I1018 10:33:25.341688  481899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:33:25.350932  481899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:33:25.354937  481899 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:33:25.355037  481899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:33:25.403384  481899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:33:25.411970  481899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:33:25.420939  481899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:33:25.424799  481899 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:33:25.424901  481899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:33:25.466286  481899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:33:25.474363  481899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:33:25.482886  481899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:33:25.486887  481899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:33:25.486959  481899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:33:25.530152  481899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
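Each trusted PEM is symlinked as <subject-hash>.0 (here b5213941.0 for minikubeCA.pem) so OpenSSL-style certificate lookups can find it by hash. A minimal Go sketch that shells out to the same openssl invocation seen in the log and creates the link; hashLink is a hypothetical helper:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink computes the OpenSSL subject hash of pemPath (as the log does with
// `openssl x509 -hash -noout`) and symlinks the cert as <hash>.0 in dir.
func hashLink(pemPath, dir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
	// Remove any existing link first, the same effect as `ln -fs`.
	_ = os.Remove(link)
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir())
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created", link) // e.g. .../b5213941.0, matching the log
}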
	I1018 10:33:25.538336  481899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:33:25.542278  481899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 10:33:25.583956  481899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 10:33:25.625718  481899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 10:33:25.667976  481899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 10:33:25.711901  481899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 10:33:25.768834  481899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
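The -checkend 86400 probes ask whether each control-plane certificate remains valid for at least the next 24 hours. The same check can be done in pure Go with crypto/x509; validFor below is a hypothetical stdlib equivalent of what the openssl invocation reports, not the code minikube runs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the first certificate in pemPath is still valid
// checkend from now — the analogue of `openssl x509 -noout -checkend 86400`.
func validFor(pemPath string, checkend time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(checkend).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for 24h:", ok)
}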
	I1018 10:33:25.839983  481899 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-715182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-715182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:33:25.840148  481899 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:33:25.840245  481899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:33:25.896365  481899 cri.go:89] found id: "8ac924f2c8ba493d59bc3b60efaa16f38faf443aab37d62f891a1809134404cc"
	I1018 10:33:25.896388  481899 cri.go:89] found id: "a31ff6775bd9d5e70b70f707aacc8fb2ae23fea962bd975f954a8c39da5690e9"
	I1018 10:33:25.896402  481899 cri.go:89] found id: "dfb7c0f4f545b605ddeb04490e8932c6ffb5e4afea7c622fa1cf23b6e8f53ed7"
	I1018 10:33:25.896406  481899 cri.go:89] found id: "5e58508b5c574d2cb44d4c48ef46f9795889437a9e76ee1a3215fd6336add58e"
	I1018 10:33:25.896410  481899 cri.go:89] found id: ""
	I1018 10:33:25.896461  481899 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 10:33:25.916166  481899 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:33:25Z" level=error msg="open /run/runc: no such file or directory"
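StartCluster lists the kube-system containers via crictl (the four IDs above) and then probes `sudo runc list -f json`; on this base image /run/runc does not exist, so the probe exits 1 and is downgraded to a warning (nothing to unpause). A sketch of tolerating that specific failure; pausedContainers is an invented helper name, not minikube's:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// pausedContainers mimics the probe in the log: run `sudo runc list -f json`
// and treat a missing /run/runc state directory as "no paused containers"
// rather than a fatal error.
func pausedContainers() (string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "no such file or directory") {
			return "", nil // runc state dir absent: nothing is paused
		}
		return "", fmt.Errorf("runc list: %v: %s", err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	list, err := pausedContainers()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("paused:", list)
}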
	I1018 10:33:25.916251  481899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:33:25.928022  481899 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 10:33:25.928043  481899 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 10:33:25.928095  481899 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 10:33:25.940307  481899 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 10:33:25.940747  481899 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-715182" does not appear in /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:33:25.940850  481899 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-293333/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-715182" cluster setting kubeconfig missing "default-k8s-diff-port-715182" context setting]
	I1018 10:33:25.941118  481899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:33:25.942671  481899 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 10:33:25.957216  481899 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 10:33:25.957262  481899 kubeadm.go:601] duration metric: took 29.201389ms to restartPrimaryControlPlane
	I1018 10:33:25.957272  481899 kubeadm.go:402] duration metric: took 117.29951ms to StartCluster
	I1018 10:33:25.957287  481899 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:33:25.957350  481899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:33:25.957945  481899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:33:25.958147  481899 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:33:25.958445  481899 config.go:182] Loaded profile config "default-k8s-diff-port-715182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:33:25.958495  481899 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:33:25.958560  481899 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-715182"
	I1018 10:33:25.958574  481899 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-715182"
	W1018 10:33:25.958583  481899 addons.go:247] addon storage-provisioner should already be in state true
	I1018 10:33:25.958652  481899 host.go:66] Checking if "default-k8s-diff-port-715182" exists ...
	I1018 10:33:25.958603  481899 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-715182"
	I1018 10:33:25.958704  481899 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-715182"
	W1018 10:33:25.958710  481899 addons.go:247] addon dashboard should already be in state true
	I1018 10:33:25.958726  481899 host.go:66] Checking if "default-k8s-diff-port-715182" exists ...
	I1018 10:33:25.959184  481899 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:33:25.958610  481899 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-715182"
	I1018 10:33:25.959724  481899 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-715182"
	I1018 10:33:25.959981  481899 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:33:25.960479  481899 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:33:25.966725  481899 out.go:179] * Verifying Kubernetes components...
	I1018 10:33:25.970478  481899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:33:26.028767  481899 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-715182"
	W1018 10:33:26.028798  481899 addons.go:247] addon default-storageclass should already be in state true
	I1018 10:33:26.028825  481899 host.go:66] Checking if "default-k8s-diff-port-715182" exists ...
	I1018 10:33:26.029305  481899 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-715182 --format={{.State.Status}}
	I1018 10:33:26.031048  481899 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 10:33:26.031222  481899 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:33:26.035023  481899 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 10:33:26.035129  481899 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:33:26.035139  481899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:33:26.035205  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:26.038208  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 10:33:26.038264  481899 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 10:33:26.038340  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:26.078981  481899 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:33:26.079007  481899 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:33:26.079073  481899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-715182
	I1018 10:33:26.093295  481899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:33:26.113394  481899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:33:26.135162  481899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/default-k8s-diff-port-715182/id_rsa Username:docker}
	I1018 10:33:26.336293  481899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:33:26.390838  481899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:33:26.397133  481899 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-715182" to be "Ready" ...
	I1018 10:33:26.414729  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 10:33:26.414765  481899 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 10:33:26.507584  481899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:33:26.557322  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 10:33:26.557345  481899 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 10:33:26.612290  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 10:33:26.612312  481899 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 10:33:26.683498  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 10:33:26.683516  481899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 10:33:26.734551  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 10:33:26.734572  481899 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 10:33:26.762925  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 10:33:26.762949  481899 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 10:33:26.780941  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 10:33:26.780963  481899 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 10:33:26.797730  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 10:33:26.797750  481899 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 10:33:26.819334  481899 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 10:33:26.819355  481899 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 10:33:26.836598  481899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
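All ten dashboard manifests are applied in a single kubectl invocation, one -f flag per file, against the in-cluster kubeconfig. A rough Go sketch of assembling that command line (the real call additionally runs under sudo on the node):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Manifest paths mirror the files scp'd to /etc/kubernetes/addons above.
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-clusterrole.yaml",
		"/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml",
		"/etc/kubernetes/addons/dashboard-configmap.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-role.yaml",
		"/etc/kubernetes/addons/dashboard-rolebinding.yaml",
		"/etc/kubernetes/addons/dashboard-sa.yaml",
		"/etc/kubernetes/addons/dashboard-secret.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m) // one -f per manifest, as in the log line
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl apply:", err)
	}
}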
	I1018 10:33:22.774598  482683 out.go:252] * Restarting existing docker container for "embed-certs-101897" ...
	I1018 10:33:22.774683  482683 cli_runner.go:164] Run: docker start embed-certs-101897
	I1018 10:33:23.100064  482683 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:33:23.155129  482683 kic.go:430] container "embed-certs-101897" state is running.
	I1018 10:33:23.155518  482683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-101897
	I1018 10:33:23.201513  482683 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/config.json ...
	I1018 10:33:23.201765  482683 machine.go:93] provisionDockerMachine start ...
	I1018 10:33:23.201832  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:23.240040  482683 main.go:141] libmachine: Using SSH client type: native
	I1018 10:33:23.240381  482683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I1018 10:33:23.240396  482683 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:33:23.241146  482683 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 10:33:26.432921  482683 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-101897
	
	I1018 10:33:26.433002  482683 ubuntu.go:182] provisioning hostname "embed-certs-101897"
	I1018 10:33:26.433097  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:26.458836  482683 main.go:141] libmachine: Using SSH client type: native
	I1018 10:33:26.459144  482683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I1018 10:33:26.459155  482683 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-101897 && echo "embed-certs-101897" | sudo tee /etc/hostname
	I1018 10:33:26.682618  482683 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-101897
	
	I1018 10:33:26.682702  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:26.710846  482683 main.go:141] libmachine: Using SSH client type: native
	I1018 10:33:26.711167  482683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I1018 10:33:26.711189  482683 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-101897' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-101897/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-101897' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:33:26.901953  482683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 10:33:26.902019  482683 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:33:26.902063  482683 ubuntu.go:190] setting up certificates
	I1018 10:33:26.902102  482683 provision.go:84] configureAuth start
	I1018 10:33:26.902181  482683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-101897
	I1018 10:33:26.933392  482683 provision.go:143] copyHostCerts
	I1018 10:33:26.933456  482683 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:33:26.933473  482683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:33:26.933554  482683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:33:26.933653  482683 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:33:26.933659  482683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:33:26.933688  482683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:33:26.933774  482683 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:33:26.933779  482683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:33:26.933803  482683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:33:26.933847  482683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.embed-certs-101897 san=[127.0.0.1 192.168.85.2 embed-certs-101897 localhost minikube]
	I1018 10:33:27.472247  482683 provision.go:177] copyRemoteCerts
	I1018 10:33:27.472314  482683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:33:27.472361  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:27.490593  482683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:33:27.602287  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:33:27.631241  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 10:33:27.663225  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:33:27.696386  482683 provision.go:87] duration metric: took 794.244039ms to configureAuth
	I1018 10:33:27.696416  482683 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:33:27.696608  482683 config.go:182] Loaded profile config "embed-certs-101897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:33:27.696721  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:27.719751  482683 main.go:141] libmachine: Using SSH client type: native
	I1018 10:33:27.720075  482683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I1018 10:33:27.720095  482683 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:33:28.158756  482683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
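Provisioning writes the insecure-registry flag for the service CIDR into /etc/sysconfig/crio.minikube and restarts crio; the tee'd file contents are echoed back in the SSH output above. A minimal sketch that renders the same drop-in locally (scratch path instead of /etc/sysconfig):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Render the sysconfig drop-in the log writes to /etc/sysconfig/crio.minikube,
	// here written to a scratch file instead of the remote node.
	serviceCIDR := "10.96.0.0/12" // matches ServiceCIDR in the cluster config
	content := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
	if err := os.WriteFile("crio.minikube", []byte(content), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(content)
}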
	I1018 10:33:28.158824  482683 machine.go:96] duration metric: took 4.957048428s to provisionDockerMachine
	I1018 10:33:28.158875  482683 start.go:293] postStartSetup for "embed-certs-101897" (driver="docker")
	I1018 10:33:28.158913  482683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:33:28.158993  482683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:33:28.159058  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:28.189382  482683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:33:28.322365  482683 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:33:28.326251  482683 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:33:28.326276  482683 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:33:28.326287  482683 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:33:28.326338  482683 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:33:28.326409  482683 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:33:28.326511  482683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:33:28.344601  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:33:28.378249  482683 start.go:296] duration metric: took 219.33154ms for postStartSetup
	I1018 10:33:28.378347  482683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:33:28.378423  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:28.412022  482683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:33:28.536313  482683 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:33:28.542363  482683 fix.go:56] duration metric: took 5.797343135s for fixHost
	I1018 10:33:28.542390  482683 start.go:83] releasing machines lock for "embed-certs-101897", held for 5.797394359s
	I1018 10:33:28.542459  482683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-101897
	I1018 10:33:28.570883  482683 ssh_runner.go:195] Run: cat /version.json
	I1018 10:33:28.570955  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:28.571197  482683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:33:28.571252  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:28.590811  482683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:33:28.613228  482683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:33:28.721821  482683 ssh_runner.go:195] Run: systemctl --version
	I1018 10:33:28.843430  482683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:33:28.919516  482683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:33:28.930892  482683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:33:28.930977  482683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:33:28.939965  482683 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
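The find/mv pass renames any bridge or podman CNI config directly under /etc/cni/net.d to *.mk_disabled so kindnet ends up as the only active CNI; here nothing matched. A stdlib Go sketch of the same rename pass (disableBridgeCNI is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI mirrors the find/mv in the log: rename any bridge or
// podman CNI config directly under dir to <name>.mk_disabled.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNI("/etc/cni/net.d")
	fmt.Println(moved, err) // empty when nothing to disable, as in the log
}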
	I1018 10:33:28.939997  482683 start.go:495] detecting cgroup driver to use...
	I1018 10:33:28.940030  482683 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:33:28.940120  482683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:33:28.968096  482683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:33:28.991338  482683 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:33:28.991409  482683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:33:29.007646  482683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:33:29.023331  482683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:33:29.234650  482683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:33:29.415972  482683 docker.go:234] disabling docker service ...
	I1018 10:33:29.416035  482683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:33:29.437858  482683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:33:29.453594  482683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:33:29.681605  482683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:33:29.877769  482683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:33:29.903400  482683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:33:29.931788  482683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:33:29.931896  482683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:29.941292  482683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:33:29.941439  482683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:29.951039  482683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:29.960455  482683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:29.969884  482683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:33:29.978614  482683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:29.988202  482683 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:29.997485  482683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:33:30.006926  482683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:33:30.017623  482683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:33:30.028432  482683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:33:30.245686  482683 ssh_runner.go:195] Run: sudo systemctl restart crio
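The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches cgroup_manager to cgroupfs, moves conmon into the pod cgroup, and opens unprivileged ports via a default sysctl, then reloads systemd and restarts crio. A regexp-based Go sketch of the first two line rewrites (the sample config contents are invented for illustration):

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf applies the same style of whole-line rewrite as the
// `sed -i 's|^.*pause_image = .*$|...|'` calls in the log.
func patchCrioConf(conf, pauseImage, cgroupManager string) string {
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(out, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return out
}

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
`
	fmt.Print(patchCrioConf(conf, "registry.k8s.io/pause:3.10.1", "cgroupfs"))
}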
	I1018 10:33:30.411266  482683 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:33:30.411372  482683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:33:30.420317  482683 start.go:563] Will wait 60s for crictl version
	I1018 10:33:30.420394  482683 ssh_runner.go:195] Run: which crictl
	I1018 10:33:30.424481  482683 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:33:30.471601  482683 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:33:30.471710  482683 ssh_runner.go:195] Run: crio --version
	I1018 10:33:30.538034  482683 ssh_runner.go:195] Run: crio --version
	I1018 10:33:30.597481  482683 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:33:30.600327  482683 cli_runner.go:164] Run: docker network inspect embed-certs-101897 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:33:30.623124  482683 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 10:33:30.627536  482683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:33:30.645675  482683 kubeadm.go:883] updating cluster {Name:embed-certs-101897 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerName:minikubeCA APIServerHAVIP: APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:33:30.645792  482683 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:33:30.645870  482683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:33:30.704747  482683 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:33:30.704823  482683 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:33:30.704913  482683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:33:30.756860  482683 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:33:30.756881  482683 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:33:30.756889  482683 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 10:33:30.756988  482683 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-101897 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
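The kubelet drop-in shown above clears the distro's ExecStart and relaunches kubelet with flags derived from the cluster config that follows it. A hypothetical text/template rendering of that unit; the template string and data keys are illustrative, not minikube's actual kubeadm template:

package main

import (
	"os"
	"text/template"
)

// An illustrative rendering of the 10-kubeadm.conf drop-in logged above.
const unitTmpl = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	err := t.Execute(os.Stdout, map[string]string{
		"Runtime":           "crio",
		"KubernetesVersion": "v1.34.1",
		"NodeName":          "embed-certs-101897",
		"NodeIP":            "192.168.85.2",
	})
	if err != nil {
		panic(err)
	}
}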
	I1018 10:33:30.757071  482683 ssh_runner.go:195] Run: crio config
	I1018 10:33:30.879038  482683 cni.go:84] Creating CNI manager for ""
	I1018 10:33:30.879101  482683 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:33:30.879134  482683 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:33:30.879190  482683 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-101897 NodeName:embed-certs-101897 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:33:30.879354  482683 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-101897"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 10:33:30.879445  482683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:33:30.891471  482683 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:33:30.891616  482683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:33:30.900164  482683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 10:33:30.919895  482683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:33:30.945046  482683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1018 10:33:30.958773  482683 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:33:30.962924  482683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:33:30.972512  482683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:33:31.201095  482683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:33:31.223855  482683 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897 for IP: 192.168.85.2
	I1018 10:33:31.223879  482683 certs.go:195] generating shared ca certs ...
	I1018 10:33:31.223904  482683 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:33:31.224088  482683 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:33:31.224152  482683 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:33:31.224165  482683 certs.go:257] generating profile certs ...
	I1018 10:33:31.224262  482683 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/client.key
	I1018 10:33:31.224337  482683 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.key.cf2721a4
	I1018 10:33:31.224388  482683 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.key
	I1018 10:33:31.224518  482683 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:33:31.224556  482683 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:33:31.224579  482683 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:33:31.224618  482683 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:33:31.224653  482683 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:33:31.224686  482683 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:33:31.224740  482683 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:33:31.225524  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:33:31.275008  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:33:31.309596  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:33:31.343950  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:33:31.373648  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 10:33:31.422471  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:33:31.467546  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:33:31.527108  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/embed-certs-101897/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 10:33:31.556020  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:33:31.610845  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:33:31.668508  482683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:33:31.719749  482683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:33:31.746316  482683 ssh_runner.go:195] Run: openssl version
	I1018 10:33:31.752768  482683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:33:31.768074  482683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:33:31.774795  482683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:33:31.774915  482683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:33:31.834127  482683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:33:31.843135  482683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:33:31.865284  482683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:33:31.873770  482683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:33:31.873890  482683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:33:31.941593  482683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 10:33:31.951491  482683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:33:31.967272  482683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:33:31.973943  482683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:33:31.974087  482683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:33:32.034814  482683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:33:32.051707  482683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:33:32.056278  482683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 10:33:32.109865  482683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 10:33:32.169380  482683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 10:33:32.260888  482683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 10:33:32.373355  482683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 10:33:32.554529  482683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 10:33:32.799304  482683 kubeadm.go:400] StartCluster: {Name:embed-certs-101897 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-101897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
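[editor's note] The StartCluster line above dumps the whole profile config in Go struct syntax. A trimmed, illustrative mirror of the fields that matter for this restart path; the field names are copied from the dump, but the struct definitions themselves are a sketch, not minikube's actual types:

package sketch

// KubernetesConfig: subset of the nested block in the dump.
type KubernetesConfig struct {
	KubernetesVersion string // "v1.34.1"
	ClusterName       string // "embed-certs-101897"
	ContainerRuntime  string // "crio"
	ServiceCIDR       string // "10.96.0.0/12"
}

// Node: one entry of the Nodes slice in the dump.
type Node struct {
	IP                string // "192.168.85.2"
	Port              int    // 8443
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

// ClusterConfig: top-level shape of the dumped profile.
type ClusterConfig struct {
	Name             string
	EmbedCerts       bool
	Driver           string // "docker"
	Memory           int    // MiB
	CPUs             int
	KubernetesConfig KubernetesConfig
	Nodes            []Node
	Addons           map[string]bool // dashboard, default-storageclass, storage-provisioner
}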
	I1018 10:33:32.799410  482683 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:33:32.799498  482683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:33:32.899605  482683 cri.go:89] found id: "0a5b488299c29fa8c745b6bbd5d7b3db828119f52e047c424ea4b9156c222088"
	I1018 10:33:32.899679  482683 cri.go:89] found id: "ddb705e0f64d66513424efc45237983978c1000f91094a9731d126dd8cab8ac7"
	I1018 10:33:32.899697  482683 cri.go:89] found id: "ea13a5fdbf596d27a2a9bdd7254f8af427b96bdad19fa1221e096954a6b07ec4"
	I1018 10:33:32.899716  482683 cri.go:89] found id: "98749e78e236d9c4ba517df85eb017b3e2daf5eb1d15c7618a96f229e9c048e9"
	I1018 10:33:32.899749  482683 cri.go:89] found id: ""
	I1018 10:33:32.899818  482683 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 10:33:32.938066  482683 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:33:32Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:33:32.938182  482683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:33:32.955757  482683 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 10:33:32.955779  482683 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 10:33:32.955844  482683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 10:33:32.976855  482683 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 10:33:32.977475  482683 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-101897" does not appear in /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:33:32.977768  482683 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-293333/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-101897" cluster setting kubeconfig missing "embed-certs-101897" context setting]
	I1018 10:33:32.978272  482683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
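[editor's note] The `verify endpoint` failure above simply means the kubeconfig has no cluster or context entry for this profile yet, so minikube repairs the file under a write lock. A minimal client-go sketch of the same existence check, assuming k8s.io/client-go is available:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := "/home/jenkins/minikube-integration/21764-293333/kubeconfig" // path from the log
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	name := "embed-certs-101897"
	_, hasCluster := cfg.Clusters[name]
	_, hasContext := cfg.Contexts[name]
	if !hasCluster || !hasContext {
		// This is the "needs updating (will repair)" branch in the log.
		fmt.Printf("kubeconfig missing %q cluster/context, repairing\n", name)
	}
}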
	I1018 10:33:32.979727  482683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 10:33:32.999446  482683 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 10:33:32.999483  482683 kubeadm.go:601] duration metric: took 43.697152ms to restartPrimaryControlPlane
	I1018 10:33:32.999499  482683 kubeadm.go:402] duration metric: took 200.200437ms to StartCluster
	I1018 10:33:32.999515  482683 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:33:32.999584  482683 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:33:33.000963  482683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:33:33.001267  482683 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:33:33.001787  482683 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:33:33.001871  482683 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-101897"
	I1018 10:33:33.001894  482683 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-101897"
	W1018 10:33:33.001904  482683 addons.go:247] addon storage-provisioner should already be in state true
	I1018 10:33:33.001928  482683 host.go:66] Checking if "embed-certs-101897" exists ...
	I1018 10:33:33.002454  482683 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:33:33.002780  482683 config.go:182] Loaded profile config "embed-certs-101897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:33:33.002849  482683 addons.go:69] Setting dashboard=true in profile "embed-certs-101897"
	I1018 10:33:33.002864  482683 addons.go:238] Setting addon dashboard=true in "embed-certs-101897"
	W1018 10:33:33.002871  482683 addons.go:247] addon dashboard should already be in state true
	I1018 10:33:33.002905  482683 host.go:66] Checking if "embed-certs-101897" exists ...
	I1018 10:33:33.003352  482683 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:33:33.003774  482683 addons.go:69] Setting default-storageclass=true in profile "embed-certs-101897"
	I1018 10:33:33.003799  482683 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-101897"
	I1018 10:33:33.004046  482683 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:33:33.008264  482683 out.go:179] * Verifying Kubernetes components...
	I1018 10:33:33.013436  482683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:33:33.056743  482683 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 10:33:33.060514  482683 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 10:33:33.063728  482683 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:33:33.222178  481899 node_ready.go:49] node "default-k8s-diff-port-715182" is "Ready"
	I1018 10:33:33.222204  481899 node_ready.go:38] duration metric: took 6.82496708s for node "default-k8s-diff-port-715182" to be "Ready" ...
	I1018 10:33:33.222218  481899 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:33:33.222276  481899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:33:36.751692  481899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.360778749s)
	I1018 10:33:36.751755  481899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.244152711s)
	I1018 10:33:36.752017  481899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.915393158s)
	I1018 10:33:36.752152  481899 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.529862232s)
	I1018 10:33:36.752166  481899 api_server.go:72] duration metric: took 10.793988924s to wait for apiserver process to appear ...
	I1018 10:33:36.752172  481899 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:33:36.752187  481899 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1018 10:33:36.755111  481899 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-715182 addons enable metrics-server
	
	I1018 10:33:36.768673  481899 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1018 10:33:36.769843  481899 api_server.go:141] control plane version: v1.34.1
	I1018 10:33:36.769869  481899 api_server.go:131] duration metric: took 17.690256ms to wait for apiserver health ...
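[editor's note] Once the apiserver process is up, the test polls https://<node-ip>:<port>/healthz until it answers 200 "ok", then reads the control-plane version. A self-contained sketch of that probe; note the real client authenticates against minikubeCA, and TLS verification is skipped here purely to keep the example standalone:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch only: the real check trusts the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8444/healthz") // endpoint from the log
	if err != nil {
		fmt.Println("apiserver not ready:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body) // expect 200: ok
}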
	I1018 10:33:36.769879  481899 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:33:36.779148  481899 system_pods.go:59] 8 kube-system pods found
	I1018 10:33:36.779185  481899 system_pods.go:61] "coredns-66bc5c9577-c2sb5" [2bf09318-3195-4ef2-a555-c4c945efa126] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:33:36.779195  481899 system_pods.go:61] "etcd-default-k8s-diff-port-715182" [13b11953-c29c-4d29-ae1b-ebce1e53f950] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:33:36.779202  481899 system_pods.go:61] "kindnet-zd5md" [e9eba0a5-422b-4250-b9b3-087619a17e95] Running
	I1018 10:33:36.779209  481899 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-715182" [823d4f57-e97b-4366-b670-121e096a2102] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:33:36.779216  481899 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-715182" [ad9c1831-0e8f-410e-a084-a4f84aeda8d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:33:36.779224  481899 system_pods.go:61] "kube-proxy-5whrp" [0b69ab6c-f661-4b7a-92ce-157440319945] Running
	I1018 10:33:36.779232  481899 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-715182" [7aa74f8f-2fa6-4ef0-9ee1-c81d0366174e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:33:36.779238  481899 system_pods.go:61] "storage-provisioner" [4e374f22-b5d4-4fc3-9c49-c35310ff348e] Running
	I1018 10:33:36.779246  481899 system_pods.go:74] duration metric: took 9.361339ms to wait for pod list to return data ...
	I1018 10:33:36.779259  481899 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:33:36.786823  481899 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 10:33:36.787732  481899 default_sa.go:45] found service account: "default"
	I1018 10:33:36.787749  481899 default_sa.go:55] duration metric: took 8.484341ms for default service account to be created ...
	I1018 10:33:36.787758  481899 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 10:33:36.789750  481899 addons.go:514] duration metric: took 10.83124061s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 10:33:36.795714  481899 system_pods.go:86] 8 kube-system pods found
	I1018 10:33:36.795743  481899 system_pods.go:89] "coredns-66bc5c9577-c2sb5" [2bf09318-3195-4ef2-a555-c4c945efa126] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:33:36.795752  481899 system_pods.go:89] "etcd-default-k8s-diff-port-715182" [13b11953-c29c-4d29-ae1b-ebce1e53f950] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:33:36.795758  481899 system_pods.go:89] "kindnet-zd5md" [e9eba0a5-422b-4250-b9b3-087619a17e95] Running
	I1018 10:33:36.795765  481899 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-715182" [823d4f57-e97b-4366-b670-121e096a2102] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:33:36.795771  481899 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-715182" [ad9c1831-0e8f-410e-a084-a4f84aeda8d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:33:36.795776  481899 system_pods.go:89] "kube-proxy-5whrp" [0b69ab6c-f661-4b7a-92ce-157440319945] Running
	I1018 10:33:36.795782  481899 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-715182" [7aa74f8f-2fa6-4ef0-9ee1-c81d0366174e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:33:36.795786  481899 system_pods.go:89] "storage-provisioner" [4e374f22-b5d4-4fc3-9c49-c35310ff348e] Running
	I1018 10:33:36.795793  481899 system_pods.go:126] duration metric: took 8.029534ms to wait for k8s-apps to be running ...
	I1018 10:33:36.795801  481899 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 10:33:36.795852  481899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:33:36.824445  481899 system_svc.go:56] duration metric: took 28.634176ms WaitForService to wait for kubelet
	I1018 10:33:36.824487  481899 kubeadm.go:586] duration metric: took 10.866303334s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:33:36.824510  481899 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:33:36.827596  481899 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:33:36.827626  481899 node_conditions.go:123] node cpu capacity is 2
	I1018 10:33:36.827641  481899 node_conditions.go:105] duration metric: took 3.123627ms to run NodePressure ...
	I1018 10:33:36.827653  481899 start.go:241] waiting for startup goroutines ...
	I1018 10:33:36.827668  481899 start.go:246] waiting for cluster config update ...
	I1018 10:33:36.827687  481899 start.go:255] writing updated cluster config ...
	I1018 10:33:36.828001  481899 ssh_runner.go:195] Run: rm -f paused
	I1018 10:33:36.837642  481899 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:33:36.842777  481899 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c2sb5" in "kube-system" namespace to be "Ready" or be gone ...
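[editor's note] The pod_ready wait above polls each matched kube-system pod until its Ready condition turns True or the pod is gone, capped at 4 minutes; the W-level lines further down are single iterations of that loop. The per-pod predicate reduces to a condition scan like this sketch over a corev1.Pod, assuming k8s.io/api is available:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady mirrors the check behind pod_ready.go: a pod counts as
// "Ready" once its PodReady condition reports ConditionTrue.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{} // in the real wait this comes from a client-go Get/List
	fmt.Println("ready:", isPodReady(pod))
}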
	I1018 10:33:33.063728  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 10:33:33.063817  482683 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 10:33:33.063893  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:33.067202  482683 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:33:33.067227  482683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:33:33.067290  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:33.079253  482683 addons.go:238] Setting addon default-storageclass=true in "embed-certs-101897"
	W1018 10:33:33.079281  482683 addons.go:247] addon default-storageclass should already be in state true
	I1018 10:33:33.079304  482683 host.go:66] Checking if "embed-certs-101897" exists ...
	I1018 10:33:33.081600  482683 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:33:33.105295  482683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:33:33.130816  482683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:33:33.151950  482683 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:33:33.151978  482683 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:33:33.152044  482683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:33:33.184848  482683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:33:33.497750  482683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:33:33.557782  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 10:33:33.557810  482683 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 10:33:33.571895  482683 node_ready.go:35] waiting up to 6m0s for node "embed-certs-101897" to be "Ready" ...
	I1018 10:33:33.576971  482683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:33:33.621173  482683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:33:33.641252  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 10:33:33.641276  482683 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 10:33:33.749903  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 10:33:33.749943  482683 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 10:33:33.883454  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 10:33:33.883486  482683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 10:33:34.077898  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 10:33:34.077926  482683 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 10:33:34.139783  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 10:33:34.139832  482683 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 10:33:34.184016  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 10:33:34.184060  482683 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 10:33:34.251244  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 10:33:34.251270  482683 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 10:33:34.309697  482683 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 10:33:34.309724  482683 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 10:33:34.341803  482683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
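[editor's note] Each addon manifest is first scp'd into /etc/kubernetes/addons and then everything is applied in a single kubectl invocation against the in-cluster kubeconfig; batching all the -f flags keeps it to one round trip. A sketch of that final step, assuming the files are already in place (the file list is a subset of the manifests named in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	files := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "apply failed: %v\n%s", err, out)
		os.Exit(1)
	}
}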
	W1018 10:33:38.850113  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:33:40.852629  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	I1018 10:33:40.895449  482683 node_ready.go:49] node "embed-certs-101897" is "Ready"
	I1018 10:33:40.895483  482683 node_ready.go:38] duration metric: took 7.323544869s for node "embed-certs-101897" to be "Ready" ...
	I1018 10:33:40.895498  482683 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:33:40.895558  482683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:33:41.254617  482683 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.677608468s)
	I1018 10:33:43.880727  482683 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.259429695s)
	I1018 10:33:44.098399  482683 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.756540357s)
	I1018 10:33:44.098609  482683 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.203034004s)
	I1018 10:33:44.098624  482683 api_server.go:72] duration metric: took 11.097322432s to wait for apiserver process to appear ...
	I1018 10:33:44.098631  482683 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:33:44.098661  482683 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 10:33:44.101588  482683 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-101897 addons enable metrics-server
	
	I1018 10:33:44.104665  482683 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	W1018 10:33:42.852885  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:33:45.352570  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	I1018 10:33:44.107618  482683 addons.go:514] duration metric: took 11.105811606s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1018 10:33:44.121650  482683 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 10:33:44.123518  482683 api_server.go:141] control plane version: v1.34.1
	I1018 10:33:44.123553  482683 api_server.go:131] duration metric: took 24.910168ms to wait for apiserver health ...
	I1018 10:33:44.123563  482683 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:33:44.141349  482683 system_pods.go:59] 8 kube-system pods found
	I1018 10:33:44.141382  482683 system_pods.go:61] "coredns-66bc5c9577-hxrmf" [0afa9baa-7349-44ad-ab0d-5a8cf04751c4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:33:44.141392  482683 system_pods.go:61] "etcd-embed-certs-101897" [bdfd5bce-7d86-4e96-ada2-43cd7ea36ba9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:33:44.141400  482683 system_pods.go:61] "kindnet-qt6bn" [e8f627be-9c95-40c3-9c90-959737c71fc9] Running
	I1018 10:33:44.141407  482683 system_pods.go:61] "kube-apiserver-embed-certs-101897" [70a4bcb4-f0af-4bcf-9101-062ba75dbba9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:33:44.141415  482683 system_pods.go:61] "kube-controller-manager-embed-certs-101897" [c6ed118d-dbcd-457c-b23d-dac329134f87] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:33:44.141420  482683 system_pods.go:61] "kube-proxy-bp45x" [1fb88f61-5197-4234-b157-2c84ed2dd0f3] Running
	I1018 10:33:44.141426  482683 system_pods.go:61] "kube-scheduler-embed-certs-101897" [59f4e8f7-bba7-4029-918c-1f827651aecb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:33:44.141430  482683 system_pods.go:61] "storage-provisioner" [0d449f69-e21a-40a5-8c77-65c4665a58f5] Running
	I1018 10:33:44.141436  482683 system_pods.go:74] duration metric: took 17.868295ms to wait for pod list to return data ...
	I1018 10:33:44.141444  482683 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:33:44.156813  482683 default_sa.go:45] found service account: "default"
	I1018 10:33:44.156835  482683 default_sa.go:55] duration metric: took 15.385076ms for default service account to be created ...
	I1018 10:33:44.156844  482683 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 10:33:44.244295  482683 system_pods.go:86] 8 kube-system pods found
	I1018 10:33:44.244385  482683 system_pods.go:89] "coredns-66bc5c9577-hxrmf" [0afa9baa-7349-44ad-ab0d-5a8cf04751c4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:33:44.244411  482683 system_pods.go:89] "etcd-embed-certs-101897" [bdfd5bce-7d86-4e96-ada2-43cd7ea36ba9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:33:44.244445  482683 system_pods.go:89] "kindnet-qt6bn" [e8f627be-9c95-40c3-9c90-959737c71fc9] Running
	I1018 10:33:44.244477  482683 system_pods.go:89] "kube-apiserver-embed-certs-101897" [70a4bcb4-f0af-4bcf-9101-062ba75dbba9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:33:44.244504  482683 system_pods.go:89] "kube-controller-manager-embed-certs-101897" [c6ed118d-dbcd-457c-b23d-dac329134f87] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:33:44.244524  482683 system_pods.go:89] "kube-proxy-bp45x" [1fb88f61-5197-4234-b157-2c84ed2dd0f3] Running
	I1018 10:33:44.244565  482683 system_pods.go:89] "kube-scheduler-embed-certs-101897" [59f4e8f7-bba7-4029-918c-1f827651aecb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:33:44.244591  482683 system_pods.go:89] "storage-provisioner" [0d449f69-e21a-40a5-8c77-65c4665a58f5] Running
	I1018 10:33:44.244613  482683 system_pods.go:126] duration metric: took 87.762946ms to wait for k8s-apps to be running ...
	I1018 10:33:44.244634  482683 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 10:33:44.244736  482683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:33:44.283732  482683 system_svc.go:56] duration metric: took 39.089807ms WaitForService to wait for kubelet
	I1018 10:33:44.283771  482683 kubeadm.go:586] duration metric: took 11.282466848s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:33:44.283791  482683 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:33:44.305270  482683 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:33:44.305327  482683 node_conditions.go:123] node cpu capacity is 2
	I1018 10:33:44.305340  482683 node_conditions.go:105] duration metric: took 21.542984ms to run NodePressure ...
	I1018 10:33:44.305353  482683 start.go:241] waiting for startup goroutines ...
	I1018 10:33:44.305371  482683 start.go:246] waiting for cluster config update ...
	I1018 10:33:44.305389  482683 start.go:255] writing updated cluster config ...
	I1018 10:33:44.305699  482683 ssh_runner.go:195] Run: rm -f paused
	I1018 10:33:44.310610  482683 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:33:44.348302  482683 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hxrmf" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 10:33:46.357202  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:33:47.352714  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:33:49.856682  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:33:48.366121  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:33:50.854825  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:33:52.350707  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:33:54.849101  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:33:56.856103  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:33:52.859469  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:33:55.354714  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:33:57.356020  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:33:59.349009  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:34:01.849636  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:33:59.854180  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:34:01.854472  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:34:04.348809  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:34:06.349895  481899 pod_ready.go:104] pod "coredns-66bc5c9577-c2sb5" is not "Ready", error: <nil>
	W1018 10:34:04.354468  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:34:06.854197  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	I1018 10:34:07.849359  481899 pod_ready.go:94] pod "coredns-66bc5c9577-c2sb5" is "Ready"
	I1018 10:34:07.849390  481899 pod_ready.go:86] duration metric: took 31.006586317s for pod "coredns-66bc5c9577-c2sb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:07.852428  481899 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:07.858394  481899 pod_ready.go:94] pod "etcd-default-k8s-diff-port-715182" is "Ready"
	I1018 10:34:07.858422  481899 pod_ready.go:86] duration metric: took 5.964119ms for pod "etcd-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:07.860647  481899 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:07.864996  481899 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-715182" is "Ready"
	I1018 10:34:07.865024  481899 pod_ready.go:86] duration metric: took 4.350645ms for pod "kube-apiserver-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:07.867184  481899 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:08.048451  481899 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-715182" is "Ready"
	I1018 10:34:08.048526  481899 pod_ready.go:86] duration metric: took 181.314099ms for pod "kube-controller-manager-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:08.247819  481899 pod_ready.go:83] waiting for pod "kube-proxy-5whrp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:08.646721  481899 pod_ready.go:94] pod "kube-proxy-5whrp" is "Ready"
	I1018 10:34:08.646752  481899 pod_ready.go:86] duration metric: took 398.903334ms for pod "kube-proxy-5whrp" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:08.846822  481899 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:09.248114  481899 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-715182" is "Ready"
	I1018 10:34:09.248142  481899 pod_ready.go:86] duration metric: took 401.293608ms for pod "kube-scheduler-default-k8s-diff-port-715182" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:09.248155  481899 pod_ready.go:40] duration metric: took 32.410477882s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:34:09.323758  481899 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:34:09.327026  481899 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-715182" cluster and "default" namespace by default
	W1018 10:34:08.857867  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:34:11.354896  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:34:13.854911  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:34:16.354459  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:34:18.854749  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:34:20.858487  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	W1018 10:34:23.356443  482683 pod_ready.go:104] pod "coredns-66bc5c9577-hxrmf" is not "Ready", error: <nil>
	I1018 10:34:24.354895  482683 pod_ready.go:94] pod "coredns-66bc5c9577-hxrmf" is "Ready"
	I1018 10:34:24.354918  482683 pod_ready.go:86] duration metric: took 40.006539572s for pod "coredns-66bc5c9577-hxrmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:24.359079  482683 pod_ready.go:83] waiting for pod "etcd-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:24.365857  482683 pod_ready.go:94] pod "etcd-embed-certs-101897" is "Ready"
	I1018 10:34:24.365882  482683 pod_ready.go:86] duration metric: took 6.780097ms for pod "etcd-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:24.368850  482683 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:24.374416  482683 pod_ready.go:94] pod "kube-apiserver-embed-certs-101897" is "Ready"
	I1018 10:34:24.374443  482683 pod_ready.go:86] duration metric: took 5.56232ms for pod "kube-apiserver-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:24.376943  482683 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:24.551671  482683 pod_ready.go:94] pod "kube-controller-manager-embed-certs-101897" is "Ready"
	I1018 10:34:24.551699  482683 pod_ready.go:86] duration metric: took 174.697941ms for pod "kube-controller-manager-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:24.751962  482683 pod_ready.go:83] waiting for pod "kube-proxy-bp45x" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:25.152062  482683 pod_ready.go:94] pod "kube-proxy-bp45x" is "Ready"
	I1018 10:34:25.152092  482683 pod_ready.go:86] duration metric: took 400.045178ms for pod "kube-proxy-bp45x" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:25.352459  482683 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:25.752631  482683 pod_ready.go:94] pod "kube-scheduler-embed-certs-101897" is "Ready"
	I1018 10:34:25.752654  482683 pod_ready.go:86] duration metric: took 400.1587ms for pod "kube-scheduler-embed-certs-101897" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:34:25.752667  482683 pod_ready.go:40] duration metric: took 41.441976257s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:34:25.842090  482683 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:34:25.845294  482683 out.go:179] * Done! kubectl is now configured to use "embed-certs-101897" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.493304018Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.501639734Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.501811724Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.501887187Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.505155892Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.505261756Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.505284271Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.508340076Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.508378812Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.508403633Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.511656945Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:34:14 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:14.511691407Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.218217338Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=232995bc-e26a-4f37-86a4-609759db2b3b name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.221476599Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c9a562c5-4f37-4220-bd35-13f441f5b9d7 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.222652825Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq/dashboard-metrics-scraper" id=3b65d2cd-067e-4762-a6ca-a788b65acfe2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.222900557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.233506444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.234317768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.249735204Z" level=info msg="Created container 58bca56ad6d8da6533a12ae09eebb734cc7b7537a1fc6fb47e782b3b7b5be731: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq/dashboard-metrics-scraper" id=3b65d2cd-067e-4762-a6ca-a788b65acfe2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.25068257Z" level=info msg="Starting container: 58bca56ad6d8da6533a12ae09eebb734cc7b7537a1fc6fb47e782b3b7b5be731" id=a2082faa-d084-4c16-bfe0-7946360898e8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.252445478Z" level=info msg="Started container" PID=1710 containerID=58bca56ad6d8da6533a12ae09eebb734cc7b7537a1fc6fb47e782b3b7b5be731 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq/dashboard-metrics-scraper id=a2082faa-d084-4c16-bfe0-7946360898e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=137778e52673471a78a8658f67e1677f5afb63223135577262855af629642268
	Oct 18 10:34:15 default-k8s-diff-port-715182 conmon[1708]: conmon 58bca56ad6d8da6533a1 <ninfo>: container 1710 exited with status 1
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.759522187Z" level=info msg="Removing container: c9b28eb14dbadc2cd4f140f14ad6e8a495a4ca27648ffb2c66c423c82a6d9e48" id=19c2ff2e-a91a-478a-9911-2cf5d23adc9a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.769865588Z" level=info msg="Error loading conmon cgroup of container c9b28eb14dbadc2cd4f140f14ad6e8a495a4ca27648ffb2c66c423c82a6d9e48: cgroup deleted" id=19c2ff2e-a91a-478a-9911-2cf5d23adc9a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:34:15 default-k8s-diff-port-715182 crio[653]: time="2025-10-18T10:34:15.775884765Z" level=info msg="Removed container c9b28eb14dbadc2cd4f140f14ad6e8a495a4ca27648ffb2c66c423c82a6d9e48: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq/dashboard-metrics-scraper" id=19c2ff2e-a91a-478a-9911-2cf5d23adc9a name=/runtime.v1.RuntimeService/RemoveContainer
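[editor's note] The CNI lines in this section show CRI-O's monitor reacting to WRITE/RENAME/CREATE events as kindnet atomically replaces 10-kindnet.conflist (write a .temp file, then rename it into place), re-resolving the default network after each event. The watch loop is the standard filesystem-notification pattern, sketched here with github.com/fsnotify/fsnotify as an assumption about the mechanism, not CRI-O's exact code:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil { // directory from the CRI-O log
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// WRITE on the .temp file, then RENAME/CREATE as it lands in place.
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			// ...re-scan the directory and update the default network here...
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}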
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	58bca56ad6d8d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago       Exited              dashboard-metrics-scraper   2                   137778e526734       dashboard-metrics-scraper-6ffb444bf9-qcmdq             kubernetes-dashboard
	3927daefbc902       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago       Running             storage-provisioner         2                   cacb30a2dd01d       storage-provisioner                                    kube-system
	fbcfeb81c2450       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   96da69e626d84       kubernetes-dashboard-855c9754f9-jqgfc                  kubernetes-dashboard
	7976b285f29ed       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago       Running             busybox                     1                   11543b640b56d       busybox                                                default
	8be7e98a5e3a3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago       Running             kube-proxy                  1                   884a4ecdf436e       kube-proxy-5whrp                                       kube-system
	1e44f1527d991       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago       Exited              storage-provisioner         1                   cacb30a2dd01d       storage-provisioner                                    kube-system
	5f57f3b79a652       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   551d67a32a043       coredns-66bc5c9577-c2sb5                               kube-system
	826c12b3cbdbb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   40db764fe695a       kindnet-zd5md                                          kube-system
	8ac924f2c8ba4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   9b0bfaa1a9fa8       kube-apiserver-default-k8s-diff-port-715182            kube-system
	a31ff6775bd9d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   00452da05a177       kube-scheduler-default-k8s-diff-port-715182            kube-system
	dfb7c0f4f545b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   754a81ee0ab85       etcd-default-k8s-diff-port-715182                      kube-system
	5e58508b5c574       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   bc67afdd1faac       kube-controller-manager-default-k8s-diff-port-715182   kube-system
	
	
	==> coredns [5f57f3b79a65261acd514378b0cfe0de5a23d594bd4cb2d6e4f39b8be06c40eb] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57004 - 48345 "HINFO IN 6760024403817757506.6651131778930793649. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027111461s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
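[editor's note] The i/o timeouts above mean coredns's kubernetes plugin could not reach the apiserver Service VIP (10.96.0.1:443) while the pod network was still converging; once kube-proxy and the CNI settle, the list/watch calls succeed and the "Still waiting" lines stop. A quick reachability probe for that VIP, as a sketch:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the in-cluster kubernetes Service VIP from the log.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver VIP unreachable:", err) // the state coredns logged
		return
	}
	conn.Close()
	fmt.Println("apiserver VIP reachable")
}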
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-715182
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-715182
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=default-k8s-diff-port-715182
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_32_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:32:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-715182
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:34:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:34:04 +0000   Sat, 18 Oct 2025 10:31:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:34:04 +0000   Sat, 18 Oct 2025 10:31:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:34:04 +0000   Sat, 18 Oct 2025 10:31:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 10:34:04 +0000   Sat, 18 Oct 2025 10:32:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-715182
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                c53d6dae-7a14-4045-ac49-41d96155b5e4
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-c2sb5                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-default-k8s-diff-port-715182                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-zd5md                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-715182             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-715182    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-5whrp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-default-k8s-diff-port-715182             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qcmdq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jqgfc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m16s                  kube-proxy       
	  Normal   Starting                 50s                    kube-proxy       
	  Warning  CgroupV1                 2m33s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m23s                  kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m23s                  kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m23s                  kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m18s                  node-controller  Node default-k8s-diff-port-715182 event: Registered Node default-k8s-diff-port-715182 in Controller
	  Normal   NodeReady                97s                    kubelet          Node default-k8s-diff-port-715182 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-715182 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                    node-controller  Node default-k8s-diff-port-715182 event: Registered Node default-k8s-diff-port-715182 in Controller
	
	
	==> dmesg <==
	[Oct18 10:13] overlayfs: idmapped layers are currently not supported
	[Oct18 10:14] overlayfs: idmapped layers are currently not supported
	[Oct18 10:15] overlayfs: idmapped layers are currently not supported
	[Oct18 10:16] overlayfs: idmapped layers are currently not supported
	[  +1.944912] overlayfs: idmapped layers are currently not supported
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	[ +55.677340] overlayfs: idmapped layers are currently not supported
	[  +3.870584] overlayfs: idmapped layers are currently not supported
	[Oct18 10:24] overlayfs: idmapped layers are currently not supported
	[ +31.226998] overlayfs: idmapped layers are currently not supported
	[Oct18 10:27] overlayfs: idmapped layers are currently not supported
	[ +41.576921] overlayfs: idmapped layers are currently not supported
	[  +5.117406] overlayfs: idmapped layers are currently not supported
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	[Oct18 10:31] overlayfs: idmapped layers are currently not supported
	[  +3.453230] overlayfs: idmapped layers are currently not supported
	[Oct18 10:33] overlayfs: idmapped layers are currently not supported
	[  +6.524055] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [dfb7c0f4f545b605ddeb04490e8932c6ffb5e4afea7c622fa1cf23b6e8f53ed7] <==
	{"level":"warn","ts":"2025-10-18T10:33:29.414972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.457363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.509806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.535182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.624290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.661923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.705939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.777463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.838111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.901661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.941953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:29.985442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.060117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.147784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.186151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.257670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.279680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.296674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.349890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.413496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.479846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.530536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.568703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.598717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:30.717795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53380","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:34:27 up  2:16,  0 user,  load average: 4.15, 4.24, 3.28
	Linux default-k8s-diff-port-715182 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [826c12b3cbdbbe27f0afcdc885a68c29c788841c412a5f5620cccaaa4752469b] <==
	I1018 10:33:34.123899       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:33:34.163768       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 10:33:34.163891       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:33:34.163904       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:33:34.163919       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:33:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:33:34.486475       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:33:34.486514       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:33:34.486523       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:33:34.528976       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 10:34:04.480676       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 10:34:04.486822       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 10:34:04.486767       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 10:34:04.487180       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1018 10:34:06.086991       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 10:34:06.087040       1 metrics.go:72] Registering metrics
	I1018 10:34:06.087109       1 controller.go:711] "Syncing nftables rules"
	I1018 10:34:14.480453       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:34:14.480514       1 main.go:301] handling current node
	I1018 10:34:24.478377       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:34:24.478418       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8ac924f2c8ba493d59bc3b60efaa16f38faf443aab37d62f891a1809134404cc] <==
	I1018 10:33:33.317992       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 10:33:33.318061       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 10:33:33.349095       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 10:33:33.351402       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 10:33:33.351421       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 10:33:33.351500       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 10:33:33.351542       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 10:33:33.352147       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 10:33:33.366299       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 10:33:33.366648       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 10:33:33.385300       1 cache.go:39] Caches are synced for autoregister controller
	I1018 10:33:33.390095       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 10:33:33.436829       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 10:33:33.488215       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 10:33:33.651479       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1018 10:33:33.735988       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 10:33:35.404113       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 10:33:35.717837       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 10:33:35.941753       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 10:33:36.074363       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 10:33:36.360795       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.152.119"}
	I1018 10:33:36.425406       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.195.75"}
	I1018 10:33:39.204834       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 10:33:39.301389       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 10:33:39.660145       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5e58508b5c574d2cb44d4c48ef46f9795889437a9e76ee1a3215fd6336add58e] <==
	I1018 10:33:39.138860       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 10:33:39.140934       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 10:33:39.142115       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 10:33:39.148449       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 10:33:39.150685       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 10:33:39.153966       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 10:33:39.154166       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 10:33:39.155375       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:33:39.157547       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 10:33:39.158753       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:33:39.159870       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 10:33:39.172127       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 10:33:39.174412       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 10:33:39.176729       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 10:33:39.189993       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 10:33:39.190169       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:33:39.190203       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 10:33:39.190231       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 10:33:39.190318       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 10:33:39.190418       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 10:33:39.190512       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-715182"
	I1018 10:33:39.190582       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 10:33:39.190656       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 10:33:39.190699       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 10:33:39.200950       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	
	
	==> kube-proxy [8be7e98a5e3a38ee08d839bf119f903facc309086fe2401359c1cda7829fdc9a] <==
	I1018 10:33:36.240108       1 server_linux.go:53] "Using iptables proxy"
	I1018 10:33:36.503154       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 10:33:36.611052       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 10:33:36.611160       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 10:33:36.611274       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 10:33:36.825165       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:33:36.825347       1 server_linux.go:132] "Using iptables Proxier"
	I1018 10:33:36.849147       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 10:33:36.849612       1 server.go:527] "Version info" version="v1.34.1"
	I1018 10:33:36.849812       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:33:36.867561       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 10:33:36.867657       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 10:33:36.867793       1 config.go:200] "Starting service config controller"
	I1018 10:33:36.867832       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 10:33:36.867879       1 config.go:106] "Starting endpoint slice config controller"
	I1018 10:33:36.867921       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 10:33:36.873543       1 config.go:309] "Starting node config controller"
	I1018 10:33:36.875536       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 10:33:36.875634       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 10:33:36.969147       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 10:33:36.969273       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 10:33:36.969310       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a31ff6775bd9d5e70b70f707aacc8fb2ae23fea962bd975f954a8c39da5690e9] <==
	I1018 10:33:31.193123       1 serving.go:386] Generated self-signed cert in-memory
	I1018 10:33:35.906254       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 10:33:35.906375       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:33:35.954673       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 10:33:35.954786       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 10:33:35.954810       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 10:33:35.954837       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 10:33:35.957030       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:33:35.957044       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:33:35.957063       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 10:33:35.957069       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 10:33:36.073577       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 10:33:36.073693       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 10:33:36.073779       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 10:33:34 default-k8s-diff-port-715182 kubelet[775]: W1018 10:33:34.305416     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/crio-11543b640b56df3d566561360d7736f3ec7e77a43b4c7caa13de6d5125d69f44 WatchSource:0}: Error finding container 11543b640b56df3d566561360d7736f3ec7e77a43b4c7caa13de6d5125d69f44: Status 404 returned error can't find the container with id 11543b640b56df3d566561360d7736f3ec7e77a43b4c7caa13de6d5125d69f44
	Oct 18 10:33:39 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:39.747446     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c554b1db-a745-4da6-9d1f-3d4e2759b03e-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-jqgfc\" (UID: \"c554b1db-a745-4da6-9d1f-3d4e2759b03e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jqgfc"
	Oct 18 10:33:39 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:39.748069     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4thv\" (UniqueName: \"kubernetes.io/projected/c554b1db-a745-4da6-9d1f-3d4e2759b03e-kube-api-access-c4thv\") pod \"kubernetes-dashboard-855c9754f9-jqgfc\" (UID: \"c554b1db-a745-4da6-9d1f-3d4e2759b03e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jqgfc"
	Oct 18 10:33:39 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:39.748242     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/432ee2d6-624c-468c-bde9-bf97729e1988-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-qcmdq\" (UID: \"432ee2d6-624c-468c-bde9-bf97729e1988\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq"
	Oct 18 10:33:39 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:39.748382     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h2d8\" (UniqueName: \"kubernetes.io/projected/432ee2d6-624c-468c-bde9-bf97729e1988-kube-api-access-8h2d8\") pod \"dashboard-metrics-scraper-6ffb444bf9-qcmdq\" (UID: \"432ee2d6-624c-468c-bde9-bf97729e1988\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq"
	Oct 18 10:33:40 default-k8s-diff-port-715182 kubelet[775]: W1018 10:33:40.085513     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2afd5447007b423c9171d584dfa6ce3f6e1835eb9f2d9050f7c856688a594c6f/crio-137778e52673471a78a8658f67e1677f5afb63223135577262855af629642268 WatchSource:0}: Error finding container 137778e52673471a78a8658f67e1677f5afb63223135577262855af629642268: Status 404 returned error can't find the container with id 137778e52673471a78a8658f67e1677f5afb63223135577262855af629642268
	Oct 18 10:33:48 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:48.704597     775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jqgfc" podStartSLOduration=1.755749665 podStartE2EDuration="9.704580867s" podCreationTimestamp="2025-10-18 10:33:39 +0000 UTC" firstStartedPulling="2025-10-18 10:33:40.093663459 +0000 UTC m=+15.040080545" lastFinishedPulling="2025-10-18 10:33:48.042494661 +0000 UTC m=+22.988911747" observedRunningTime="2025-10-18 10:33:48.704105292 +0000 UTC m=+23.650522386" watchObservedRunningTime="2025-10-18 10:33:48.704580867 +0000 UTC m=+23.650997961"
	Oct 18 10:33:54 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:54.699580     775 scope.go:117] "RemoveContainer" containerID="f2b87e2fd82851ba776e7c202eda2438401fbb245d0d1cf1badc69d1c52efb18"
	Oct 18 10:33:55 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:55.705937     775 scope.go:117] "RemoveContainer" containerID="c9b28eb14dbadc2cd4f140f14ad6e8a495a4ca27648ffb2c66c423c82a6d9e48"
	Oct 18 10:33:55 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:55.713027     775 scope.go:117] "RemoveContainer" containerID="f2b87e2fd82851ba776e7c202eda2438401fbb245d0d1cf1badc69d1c52efb18"
	Oct 18 10:33:55 default-k8s-diff-port-715182 kubelet[775]: E1018 10:33:55.727790     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qcmdq_kubernetes-dashboard(432ee2d6-624c-468c-bde9-bf97729e1988)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq" podUID="432ee2d6-624c-468c-bde9-bf97729e1988"
	Oct 18 10:33:56 default-k8s-diff-port-715182 kubelet[775]: I1018 10:33:56.707324     775 scope.go:117] "RemoveContainer" containerID="c9b28eb14dbadc2cd4f140f14ad6e8a495a4ca27648ffb2c66c423c82a6d9e48"
	Oct 18 10:33:56 default-k8s-diff-port-715182 kubelet[775]: E1018 10:33:56.707907     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qcmdq_kubernetes-dashboard(432ee2d6-624c-468c-bde9-bf97729e1988)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq" podUID="432ee2d6-624c-468c-bde9-bf97729e1988"
	Oct 18 10:34:00 default-k8s-diff-port-715182 kubelet[775]: I1018 10:34:00.027188     775 scope.go:117] "RemoveContainer" containerID="c9b28eb14dbadc2cd4f140f14ad6e8a495a4ca27648ffb2c66c423c82a6d9e48"
	Oct 18 10:34:00 default-k8s-diff-port-715182 kubelet[775]: E1018 10:34:00.028009     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qcmdq_kubernetes-dashboard(432ee2d6-624c-468c-bde9-bf97729e1988)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq" podUID="432ee2d6-624c-468c-bde9-bf97729e1988"
	Oct 18 10:34:05 default-k8s-diff-port-715182 kubelet[775]: I1018 10:34:05.730033     775 scope.go:117] "RemoveContainer" containerID="1e44f1527d99187a2cfb9fe74a914deb93372eeeee161687d7f9c60126af645c"
	Oct 18 10:34:15 default-k8s-diff-port-715182 kubelet[775]: I1018 10:34:15.217477     775 scope.go:117] "RemoveContainer" containerID="c9b28eb14dbadc2cd4f140f14ad6e8a495a4ca27648ffb2c66c423c82a6d9e48"
	Oct 18 10:34:15 default-k8s-diff-port-715182 kubelet[775]: I1018 10:34:15.757480     775 scope.go:117] "RemoveContainer" containerID="c9b28eb14dbadc2cd4f140f14ad6e8a495a4ca27648ffb2c66c423c82a6d9e48"
	Oct 18 10:34:15 default-k8s-diff-port-715182 kubelet[775]: I1018 10:34:15.757828     775 scope.go:117] "RemoveContainer" containerID="58bca56ad6d8da6533a12ae09eebb734cc7b7537a1fc6fb47e782b3b7b5be731"
	Oct 18 10:34:15 default-k8s-diff-port-715182 kubelet[775]: E1018 10:34:15.758068     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qcmdq_kubernetes-dashboard(432ee2d6-624c-468c-bde9-bf97729e1988)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq" podUID="432ee2d6-624c-468c-bde9-bf97729e1988"
	Oct 18 10:34:20 default-k8s-diff-port-715182 kubelet[775]: I1018 10:34:20.014706     775 scope.go:117] "RemoveContainer" containerID="58bca56ad6d8da6533a12ae09eebb734cc7b7537a1fc6fb47e782b3b7b5be731"
	Oct 18 10:34:20 default-k8s-diff-port-715182 kubelet[775]: E1018 10:34:20.014914     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qcmdq_kubernetes-dashboard(432ee2d6-624c-468c-bde9-bf97729e1988)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qcmdq" podUID="432ee2d6-624c-468c-bde9-bf97729e1988"
	Oct 18 10:34:21 default-k8s-diff-port-715182 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 10:34:21 default-k8s-diff-port-715182 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 10:34:21 default-k8s-diff-port-715182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [fbcfeb81c2450f4d9bcac716f9e0712d078fee0206275c2807833c4628397eaa] <==
	2025/10/18 10:33:48 Starting overwatch
	2025/10/18 10:33:48 Using namespace: kubernetes-dashboard
	2025/10/18 10:33:48 Using in-cluster config to connect to apiserver
	2025/10/18 10:33:48 Using secret token for csrf signing
	2025/10/18 10:33:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 10:33:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 10:33:48 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 10:33:48 Generating JWE encryption key
	2025/10/18 10:33:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 10:33:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 10:33:48 Initializing JWE encryption key from synchronized object
	2025/10/18 10:33:48 Creating in-cluster Sidecar client
	2025/10/18 10:33:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 10:33:48 Serving insecurely on HTTP port: 9090
	2025/10/18 10:34:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1e44f1527d99187a2cfb9fe74a914deb93372eeeee161687d7f9c60126af645c] <==
	I1018 10:33:35.575074       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 10:34:05.584404       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [3927daefbc902bf8510c1cdc5663cd63c2b5f4102fb088ba05776992e0758eed] <==
	I1018 10:34:05.787765       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 10:34:05.800053       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 10:34:05.800102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 10:34:05.803053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:09.260100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:13.521052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:17.119368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:20.173590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:23.196732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:23.204037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:34:23.204226       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 10:34:23.204321       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d2469c43-20ab-4e8a-ab93-03156d0280d3", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-715182_ac8c4e63-d390-472c-8b81-451c630f23eb became leader
	I1018 10:34:23.204545       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-715182_ac8c4e63-d390-472c-8b81-451c630f23eb!
	W1018 10:34:23.220415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:23.228837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:34:23.305699       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-715182_ac8c4e63-d390-472c-8b81-451c630f23eb!
	W1018 10:34:25.232716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:25.240866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:27.245466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:27.251655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
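The repeated warnings.go:70 lines in the second storage-provisioner log above come from taking the leader-election lock on a v1 Endpoints object ("k8s.io-minikube-hostpath"), an API deprecated since Kubernetes v1.33. A minimal Go sketch of the Lease-based lock that client-go recommends instead (illustrative only, not the provisioner's actual code; the lock name mirrors the one in the log):

	package main

	import (
		"context"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
		"k8s.io/klog/v2"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			klog.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		// coordination.k8s.io/v1 Lease lock; avoids the v1 Endpoints
		// deprecation warnings seen in the provisioner log above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { klog.Info("acquired lease; starting provisioner") },
				OnStoppedLeading: func() { klog.Info("lost lease") },
			},
		})
	}
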
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-715182 -n default-k8s-diff-port-715182
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-715182 -n default-k8s-diff-port-715182: exit status 2 (369.998904ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-715182 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.11s)
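The post-mortem check at helpers_test.go:269 above shells out to kubectl with a field selector to list any non-Running pods. The same query as a minimal client-go sketch (a hypothetical standalone program, not the test helper itself):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// Lists pods in all namespaces whose phase is not Running, matching
	// kubectl's --field-selector=status.phase!=Running used above.
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}
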

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (8.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-101897 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-101897 --alsologtostderr -v=1: exit status 80 (2.218539874s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-101897 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 10:34:37.790740  488902 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:34:37.791036  488902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:34:37.791067  488902 out.go:374] Setting ErrFile to fd 2...
	I1018 10:34:37.791086  488902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:34:37.791375  488902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:34:37.791713  488902 out.go:368] Setting JSON to false
	I1018 10:34:37.791768  488902 mustload.go:65] Loading cluster: embed-certs-101897
	I1018 10:34:37.792285  488902 config.go:182] Loaded profile config "embed-certs-101897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:34:37.793164  488902 cli_runner.go:164] Run: docker container inspect embed-certs-101897 --format={{.State.Status}}
	I1018 10:34:37.813756  488902 host.go:66] Checking if "embed-certs-101897" exists ...
	I1018 10:34:37.814088  488902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:34:37.908924  488902 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:true NGoroutines:70 SystemTime:2025-10-18 10:34:37.895820206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:34:37.909584  488902 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-101897 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 10:34:37.912151  488902 out.go:179] * Pausing node embed-certs-101897 ... 
	I1018 10:34:37.913408  488902 host.go:66] Checking if "embed-certs-101897" exists ...
	I1018 10:34:37.913726  488902 ssh_runner.go:195] Run: systemctl --version
	I1018 10:34:37.913786  488902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-101897
	I1018 10:34:37.939005  488902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/embed-certs-101897/id_rsa Username:docker}
	I1018 10:34:38.046212  488902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:34:38.069719  488902 pause.go:52] kubelet running: true
	I1018 10:34:38.069816  488902 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:34:38.368342  488902 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:34:38.368434  488902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:34:38.464446  488902 cri.go:89] found id: "82aa17e63acc5c9ee30d47b290ef374b230450f3fa79a6048d75d01c95af1229"
	I1018 10:34:38.464475  488902 cri.go:89] found id: "444cf18cb855af9d1e68665dcc30cb1b65a4ea7542eeb0eca0c74e9c0eb2d3ff"
	I1018 10:34:38.464480  488902 cri.go:89] found id: "73ffb19e0ddc3982b14bf8a1380764785f013d969977dd645a673cd8aef57ec1"
	I1018 10:34:38.464484  488902 cri.go:89] found id: "34ddaa028d2f62182e466e60e132fe6f57e28a5686a9fbc0662ab810e428fde4"
	I1018 10:34:38.464488  488902 cri.go:89] found id: "1cd79be1ea9aff71ffca848e73458fa728047d78d2e68ae5aeb1565abb1f298c"
	I1018 10:34:38.464492  488902 cri.go:89] found id: "0a5b488299c29fa8c745b6bbd5d7b3db828119f52e047c424ea4b9156c222088"
	I1018 10:34:38.464495  488902 cri.go:89] found id: "ddb705e0f64d66513424efc45237983978c1000f91094a9731d126dd8cab8ac7"
	I1018 10:34:38.464498  488902 cri.go:89] found id: "ea13a5fdbf596d27a2a9bdd7254f8af427b96bdad19fa1221e096954a6b07ec4"
	I1018 10:34:38.464502  488902 cri.go:89] found id: "98749e78e236d9c4ba517df85eb017b3e2daf5eb1d15c7618a96f229e9c048e9"
	I1018 10:34:38.464513  488902 cri.go:89] found id: "0f2ffe2bb4ec0f77d67b1c87c811d1147b9404af0c12ff58c54e7202460bcecc"
	I1018 10:34:38.464522  488902 cri.go:89] found id: "bedd687931a6c72eba28323577cb0d8ab111b649fbb82f440e5a34bc42246086"
	I1018 10:34:38.464525  488902 cri.go:89] found id: ""
	I1018 10:34:38.464575  488902 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:34:38.484700  488902 retry.go:31] will retry after 371.608342ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:34:38Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:34:38.856981  488902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:34:38.872629  488902 pause.go:52] kubelet running: false
	I1018 10:34:38.872695  488902 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:34:39.062280  488902 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:34:39.062359  488902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:34:39.154621  488902 cri.go:89] found id: "82aa17e63acc5c9ee30d47b290ef374b230450f3fa79a6048d75d01c95af1229"
	I1018 10:34:39.154640  488902 cri.go:89] found id: "444cf18cb855af9d1e68665dcc30cb1b65a4ea7542eeb0eca0c74e9c0eb2d3ff"
	I1018 10:34:39.154645  488902 cri.go:89] found id: "73ffb19e0ddc3982b14bf8a1380764785f013d969977dd645a673cd8aef57ec1"
	I1018 10:34:39.154657  488902 cri.go:89] found id: "34ddaa028d2f62182e466e60e132fe6f57e28a5686a9fbc0662ab810e428fde4"
	I1018 10:34:39.154661  488902 cri.go:89] found id: "1cd79be1ea9aff71ffca848e73458fa728047d78d2e68ae5aeb1565abb1f298c"
	I1018 10:34:39.154668  488902 cri.go:89] found id: "0a5b488299c29fa8c745b6bbd5d7b3db828119f52e047c424ea4b9156c222088"
	I1018 10:34:39.154671  488902 cri.go:89] found id: "ddb705e0f64d66513424efc45237983978c1000f91094a9731d126dd8cab8ac7"
	I1018 10:34:39.154674  488902 cri.go:89] found id: "ea13a5fdbf596d27a2a9bdd7254f8af427b96bdad19fa1221e096954a6b07ec4"
	I1018 10:34:39.154677  488902 cri.go:89] found id: "98749e78e236d9c4ba517df85eb017b3e2daf5eb1d15c7618a96f229e9c048e9"
	I1018 10:34:39.154683  488902 cri.go:89] found id: "0f2ffe2bb4ec0f77d67b1c87c811d1147b9404af0c12ff58c54e7202460bcecc"
	I1018 10:34:39.154686  488902 cri.go:89] found id: "bedd687931a6c72eba28323577cb0d8ab111b649fbb82f440e5a34bc42246086"
	I1018 10:34:39.154689  488902 cri.go:89] found id: ""
	I1018 10:34:39.154735  488902 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:34:39.183789  488902 retry.go:31] will retry after 431.336014ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:34:39Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:34:39.615428  488902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:34:39.630288  488902 pause.go:52] kubelet running: false
	I1018 10:34:39.630353  488902 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:34:39.842667  488902 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:34:39.842746  488902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:34:39.927817  488902 cri.go:89] found id: "82aa17e63acc5c9ee30d47b290ef374b230450f3fa79a6048d75d01c95af1229"
	I1018 10:34:39.927835  488902 cri.go:89] found id: "444cf18cb855af9d1e68665dcc30cb1b65a4ea7542eeb0eca0c74e9c0eb2d3ff"
	I1018 10:34:39.927841  488902 cri.go:89] found id: "73ffb19e0ddc3982b14bf8a1380764785f013d969977dd645a673cd8aef57ec1"
	I1018 10:34:39.927844  488902 cri.go:89] found id: "34ddaa028d2f62182e466e60e132fe6f57e28a5686a9fbc0662ab810e428fde4"
	I1018 10:34:39.927849  488902 cri.go:89] found id: "1cd79be1ea9aff71ffca848e73458fa728047d78d2e68ae5aeb1565abb1f298c"
	I1018 10:34:39.927852  488902 cri.go:89] found id: "0a5b488299c29fa8c745b6bbd5d7b3db828119f52e047c424ea4b9156c222088"
	I1018 10:34:39.927856  488902 cri.go:89] found id: "ddb705e0f64d66513424efc45237983978c1000f91094a9731d126dd8cab8ac7"
	I1018 10:34:39.927859  488902 cri.go:89] found id: "ea13a5fdbf596d27a2a9bdd7254f8af427b96bdad19fa1221e096954a6b07ec4"
	I1018 10:34:39.927862  488902 cri.go:89] found id: "98749e78e236d9c4ba517df85eb017b3e2daf5eb1d15c7618a96f229e9c048e9"
	I1018 10:34:39.927868  488902 cri.go:89] found id: "0f2ffe2bb4ec0f77d67b1c87c811d1147b9404af0c12ff58c54e7202460bcecc"
	I1018 10:34:39.927872  488902 cri.go:89] found id: "bedd687931a6c72eba28323577cb0d8ab111b649fbb82f440e5a34bc42246086"
	I1018 10:34:39.927875  488902 cri.go:89] found id: ""
	I1018 10:34:39.927922  488902 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:34:39.941927  488902 out.go:203] 
	W1018 10:34:39.943065  488902 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:34:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:34:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 10:34:39.943253  488902 out.go:285] * 
	* 
	W1018 10:34:39.951298  488902 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 10:34:39.953086  488902 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-101897 --alsologtostderr -v=1 failed: exit status 80
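Root cause visible in the stderr above: with the crio runtime, `minikube pause` enumerates running containers by shelling out to `sudo runc list -f json`, and runc exits 1 here because its default state root `/run/runc` does not exist on this node. The sketch below is an illustration, not minikube's actual pause code (the type and error handling are simplified assumptions); it shows the shape of that step and why a missing state directory surfaces as GUEST_PAUSE:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// runcContainer holds the fields we need from `runc list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func listRunning() ([]runcContainer, error) {
	// runc keeps container state under /run/runc by default; until the
	// runtime has created at least one container there, runc can fail with
	// "open /run/runc: no such file or directory" and exit 1, which is
	// exactly the failure surfaced as GUEST_PAUSE above.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	if s := string(out); s != "" && s != "null\n" {
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
	}
	return cs, nil
}

func main() {
	cs, err := listRunning()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, c := range cs {
		fmt.Println(c.ID, c.Status)
	}
}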
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-101897
helpers_test.go:243: (dbg) docker inspect embed-certs-101897:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6",
	        "Created": "2025-10-18T10:31:37.027393759Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 482913,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:33:22.813067208Z",
	            "FinishedAt": "2025-10-18T10:33:21.827772315Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6/hostname",
	        "HostsPath": "/var/lib/docker/containers/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6/hosts",
	        "LogPath": "/var/lib/docker/containers/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6-json.log",
	        "Name": "/embed-certs-101897",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-101897:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-101897",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6",
	                "LowerDir": "/var/lib/docker/overlay2/23e1fcb97e4afbd4d0b2f645ecf5499e46c462cdb419f43fa66b0ff224da5d89-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23e1fcb97e4afbd4d0b2f645ecf5499e46c462cdb419f43fa66b0ff224da5d89/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23e1fcb97e4afbd4d0b2f645ecf5499e46c462cdb419f43fa66b0ff224da5d89/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23e1fcb97e4afbd4d0b2f645ecf5499e46c462cdb419f43fa66b0ff224da5d89/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-101897",
	                "Source": "/var/lib/docker/volumes/embed-certs-101897/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-101897",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-101897",
	                "name.minikube.sigs.k8s.io": "embed-certs-101897",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e1e7ea96b603c15490122dae014747cc570a41918d84f3ef43639b19011b9f69",
	            "SandboxKey": "/var/run/docker/netns/e1e7ea96b603",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-101897": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:71:43:ac:c6:bb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6f55db3cc24016b7f9a5c2a2cb317625e0e1d0053a68e0f05bbc6f3ae8ab71a",
	                    "EndpointID": "80c7bf53cacd944208bec54baf5dce39e5e411fbe69c0a0cc1bcc27c05e3bd8c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-101897",
	                        "a8859be818ee"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
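Only a few fields of the inspect dump above matter for this post-mortem: `State.Status` is `running` and the apiserver's 8443/tcp is published at 127.0.0.1:33447, so the host should be able to reach the cluster. A quick way to pull just those fields (a hedged sketch; the harness itself dumps the full JSON, and the format-template approach mirrors the `docker container inspect -f` calls that appear later in the minikube log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Status plus the host port mapped to the apiserver's 8443/tcp; the
	// inspect output above reports "running" and 127.0.0.1:33447.
	format := `{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "embed-certs-101897").CombinedOutput()
	if err != nil {
		fmt.Println("inspect failed:", err)
	}
	fmt.Printf("%s", out)
}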
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-101897 -n embed-certs-101897
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-101897 -n embed-certs-101897: exit status 2 (552.548459ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-101897 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-101897 logs -n 25: (1.801952382s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:30 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p cert-expiration-733799 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-733799       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ image   │ old-k8s-version-309062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ pause   │ -p old-k8s-version-309062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │                     │
	│ delete  │ -p old-k8s-version-309062                                                                                                                                                                                                                     │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ delete  │ -p old-k8s-version-309062                                                                                                                                                                                                                     │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:32 UTC │
	│ delete  │ -p cert-expiration-733799                                                                                                                                                                                                                     │ cert-expiration-733799       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:32 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-715182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-715182 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-101897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	│ stop    │ -p embed-certs-101897 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-715182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-101897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ start   │ -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:34 UTC │
	│ image   │ default-k8s-diff-port-715182 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ pause   │ -p default-k8s-diff-port-715182 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-715182                                                                                                                                                                                                               │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p default-k8s-diff-port-715182                                                                                                                                                                                                               │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-922359                                                                                                                                                                                                               │ disable-driver-mounts-922359 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ start   │ -p no-preload-027087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	│ image   │ embed-certs-101897 image list --format=json                                                                                                                                                                                                   │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ pause   │ -p embed-certs-101897 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:34:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 10:34:31.678391  487845 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:34:31.678508  487845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:34:31.678514  487845 out.go:374] Setting ErrFile to fd 2...
	I1018 10:34:31.678519  487845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:34:31.678909  487845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:34:31.679470  487845 out.go:368] Setting JSON to false
	I1018 10:34:31.680415  487845 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8222,"bootTime":1760775450,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:34:31.680501  487845 start.go:141] virtualization:  
	I1018 10:34:31.687718  487845 out.go:179] * [no-preload-027087] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:34:31.691490  487845 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:34:31.691568  487845 notify.go:220] Checking for updates...
	I1018 10:34:31.699014  487845 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:34:31.702334  487845 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:34:31.705646  487845 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:34:31.708821  487845 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:34:31.712930  487845 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:34:31.716820  487845 config.go:182] Loaded profile config "embed-certs-101897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:34:31.717018  487845 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:34:31.748966  487845 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:34:31.749095  487845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:34:31.840263  487845 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 10:34:31.828996055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:34:31.840377  487845 docker.go:318] overlay module found
	I1018 10:34:31.843473  487845 out.go:179] * Using the docker driver based on user configuration
	I1018 10:34:31.846715  487845 start.go:305] selected driver: docker
	I1018 10:34:31.846771  487845 start.go:925] validating driver "docker" against <nil>
	I1018 10:34:31.846821  487845 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:34:31.849088  487845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:34:31.931142  487845 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 10:34:31.919236549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:34:31.931316  487845 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 10:34:31.931553  487845 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:34:31.934516  487845 out.go:179] * Using Docker driver with root privileges
	I1018 10:34:31.937391  487845 cni.go:84] Creating CNI manager for ""
	I1018 10:34:31.937479  487845 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:34:31.937496  487845 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 10:34:31.937580  487845 start.go:349] cluster config:
	{Name:no-preload-027087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-027087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:34:31.940865  487845 out.go:179] * Starting "no-preload-027087" primary control-plane node in "no-preload-027087" cluster
	I1018 10:34:31.944583  487845 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:34:31.947581  487845 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:34:31.950553  487845 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:34:31.950631  487845 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:34:31.950955  487845 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/config.json ...
	I1018 10:34:31.950988  487845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/config.json: {Name:mkcd35f6ee370444afb52d25334f8712a4892472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:34:31.954380  487845 cache.go:107] acquiring lock: {Name:mkaf3d4648d07ea61f5c43b4ac6cff6e96e07d0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.954517  487845 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 10:34:31.954531  487845 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 3.282311ms
	I1018 10:34:31.954623  487845 cache.go:107] acquiring lock: {Name:mkce90ae98faaf046844c77feccd02a8c89b22bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.955546  487845 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 10:34:31.954651  487845 cache.go:107] acquiring lock: {Name:mkaa713f6c6c749f7890994ea47ccb489ab7b76a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.955671  487845 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 10:34:31.955759  487845 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 10:34:31.954672  487845 cache.go:107] acquiring lock: {Name:mkbf154924b5d05f1add0f80d2d8992cab46ca22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.956105  487845 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 10:34:31.954689  487845 cache.go:107] acquiring lock: {Name:mk7c500c022aee187177cdcb3e6cd138895cc689 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.956248  487845 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 10:34:31.954706  487845 cache.go:107] acquiring lock: {Name:mk8d87cb313c81485b1cabba19862a22e85903db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.956410  487845 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1018 10:34:31.954730  487845 cache.go:107] acquiring lock: {Name:mkf60d23fd6f24668b2e7aa1b277366e0a8c4f15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.956589  487845 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1018 10:34:31.954746  487845 cache.go:107] acquiring lock: {Name:mk79330e484fcb6a5af61229914c16bea91c5633 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.956775  487845 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 10:34:31.958664  487845 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1018 10:34:31.958825  487845 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 10:34:31.960599  487845 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 10:34:31.960853  487845 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 10:34:31.961661  487845 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 10:34:31.962655  487845 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1018 10:34:31.962750  487845 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 10:34:31.978062  487845 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:34:31.978087  487845 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 10:34:31.978100  487845 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:34:31.978123  487845 start.go:360] acquireMachinesLock for no-preload-027087: {Name:mk3407a2c92d7e64b372433da7fc52893eca365e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.978235  487845 start.go:364] duration metric: took 90.347µs to acquireMachinesLock for "no-preload-027087"
	I1018 10:34:31.978268  487845 start.go:93] Provisioning new machine with config: &{Name:no-preload-027087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-027087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:34:31.978342  487845 start.go:125] createHost starting for "" (driver="docker")
	I1018 10:34:31.982007  487845 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 10:34:31.982290  487845 start.go:159] libmachine.API.Create for "no-preload-027087" (driver="docker")
	I1018 10:34:31.982336  487845 client.go:168] LocalClient.Create starting
	I1018 10:34:31.982422  487845 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem
	I1018 10:34:31.982460  487845 main.go:141] libmachine: Decoding PEM data...
	I1018 10:34:31.982477  487845 main.go:141] libmachine: Parsing certificate...
	I1018 10:34:31.982541  487845 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem
	I1018 10:34:31.982563  487845 main.go:141] libmachine: Decoding PEM data...
	I1018 10:34:31.982574  487845 main.go:141] libmachine: Parsing certificate...
	I1018 10:34:31.982942  487845 cli_runner.go:164] Run: docker network inspect no-preload-027087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 10:34:32.005648  487845 cli_runner.go:211] docker network inspect no-preload-027087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 10:34:32.005760  487845 network_create.go:284] running [docker network inspect no-preload-027087] to gather additional debugging logs...
	I1018 10:34:32.005786  487845 cli_runner.go:164] Run: docker network inspect no-preload-027087
	W1018 10:34:32.024079  487845 cli_runner.go:211] docker network inspect no-preload-027087 returned with exit code 1
	I1018 10:34:32.024109  487845 network_create.go:287] error running [docker network inspect no-preload-027087]: docker network inspect no-preload-027087: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-027087 not found
	I1018 10:34:32.024122  487845 network_create.go:289] output of [docker network inspect no-preload-027087]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-027087 not found
	
	** /stderr **
	I1018 10:34:32.024228  487845 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:34:32.043427  487845 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-57e2bd20fa2f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:61:d0:06:18:0c} reservation:<nil>}
	I1018 10:34:32.043710  487845 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb4a8c61b69d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:8c:0f:03:ab:d8} reservation:<nil>}
	I1018 10:34:32.044067  487845 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1d3a8356dfdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:ce:7a:d0:e4:d4} reservation:<nil>}
	I1018 10:34:32.044592  487845 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c8a070}
	I1018 10:34:32.044652  487845 network_create.go:124] attempt to create docker network no-preload-027087 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 10:34:32.044743  487845 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-027087 no-preload-027087
	I1018 10:34:32.122836  487845 network_create.go:108] docker network no-preload-027087 192.168.76.0/24 created
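The "skipping subnet ... that is taken" lines followed by "using free private subnet 192.168.76.0/24" show the subnet scan: candidate 192.168.x.0/24 blocks are tried in steps of 9 (49, 58, 67, 76, ...) and the first block without a conflicting host interface wins. A rough sketch of that probe (an assumption-laden simplification: the real network.go also consults routes and in-process reservations, which this omits):

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface address already falls inside
// ipnet, which is how subnets like 192.168.49.0/24 are detected as in use.
func taken(ipnet *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative on error
	}
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	// Candidates step by 9 (49, 58, 67, 76, ...), matching the log above;
	// the first free block becomes the new docker network's subnet.
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, ipnet, _ := net.ParseCIDR(cidr)
		if !taken(ipnet) {
			fmt.Println("using free private subnet", cidr)
			return
		}
	}
	fmt.Println("no free subnet found")
}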
	I1018 10:34:32.122885  487845 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-027087" container
	I1018 10:34:32.122983  487845 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 10:34:32.140807  487845 cli_runner.go:164] Run: docker volume create no-preload-027087 --label name.minikube.sigs.k8s.io=no-preload-027087 --label created_by.minikube.sigs.k8s.io=true
	I1018 10:34:32.158612  487845 oci.go:103] Successfully created a docker volume no-preload-027087
	I1018 10:34:32.158704  487845 cli_runner.go:164] Run: docker run --rm --name no-preload-027087-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-027087 --entrypoint /usr/bin/test -v no-preload-027087:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 10:34:32.300803  487845 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1018 10:34:32.300866  487845 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1018 10:34:32.301603  487845 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1018 10:34:32.316247  487845 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1018 10:34:32.318475  487845 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1018 10:34:32.322078  487845 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1018 10:34:32.336933  487845 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1018 10:34:32.363950  487845 cache.go:157] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1018 10:34:32.364007  487845 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 409.275131ms
	I1018 10:34:32.364043  487845 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 10:34:32.729121  487845 cache.go:157] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 10:34:32.729155  487845 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 774.534688ms
	I1018 10:34:32.729214  487845 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 10:34:32.836214  487845 oci.go:107] Successfully prepared a docker volume no-preload-027087
	I1018 10:34:32.836251  487845 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1018 10:34:32.836416  487845 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 10:34:32.836537  487845 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 10:34:32.893541  487845 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-027087 --name no-preload-027087 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-027087 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-027087 --network no-preload-027087 --ip 192.168.76.2 --volume no-preload-027087:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 10:34:33.213210  487845 cache.go:157] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 10:34:33.213255  487845 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.258565102s
	I1018 10:34:33.213270  487845 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 10:34:33.314923  487845 cache.go:157] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 10:34:33.314947  487845 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.360198862s
	I1018 10:34:33.314958  487845 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 10:34:33.325542  487845 cache.go:157] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 10:34:33.325567  487845 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.370895082s
	I1018 10:34:33.325579  487845 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 10:34:33.339418  487845 cache.go:157] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 10:34:33.339443  487845 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.384791446s
	I1018 10:34:33.339454  487845 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 10:34:33.402553  487845 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Running}}
	I1018 10:34:33.436308  487845 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:34:33.487021  487845 cli_runner.go:164] Run: docker exec no-preload-027087 stat /var/lib/dpkg/alternatives/iptables
	I1018 10:34:33.562101  487845 oci.go:144] the created container "no-preload-027087" has a running status.
	I1018 10:34:33.562141  487845 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa...
	I1018 10:34:34.231642  487845 cache.go:157] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 10:34:34.231674  487845 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.276966792s
	I1018 10:34:34.231708  487845 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 10:34:34.231725  487845 cache.go:87] Successfully saved all images to host disk.
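The interleaved cache.go lines above are the per-image cache fan-out: each image path gets its own lock (a file lock with a 500ms retry delay and 10m timeout, per the log), a fast path when the tar already exists, and the saves run concurrently, which is why the "took ..." durations report out of timestamp order. A simplified sketch of that flow (assumptions: in-memory mutexes stand in for the file locks, and the download/save step is omitted):

package main

import (
	"fmt"
	"os"
	"sync"
)

// cacheImage is a stand-in for the cache.go flow above: take a per-image
// lock, return immediately on a tar-file cache hit, otherwise download
// and save (omitted here).
func cacheImage(img, tarPath string, lock *sync.Mutex) error {
	lock.Lock()
	defer lock.Unlock()
	if _, err := os.Stat(tarPath); err == nil {
		return nil // ".../<image> exists" fast path
	}
	return fmt.Errorf("%s: download-and-save omitted from this sketch", img)
}

func main() {
	images := map[string]string{
		"registry.k8s.io/pause:3.10.1": "/tmp/cache/pause_3.10.1",
		"registry.k8s.io/etcd:3.6.4-0": "/tmp/cache/etcd_3.6.4-0",
	}
	locks := make(map[string]*sync.Mutex, len(images))
	for img := range images {
		locks[img] = &sync.Mutex{}
	}
	var wg sync.WaitGroup
	for img, tar := range images {
		wg.Add(1)
		go func(img, tar string) {
			defer wg.Done()
			if err := cacheImage(img, tar, locks[img]); err != nil {
				fmt.Println(err)
			}
		}(img, tar)
	}
	wg.Wait()
}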
	I1018 10:34:34.334061  487845 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 10:34:34.352249  487845 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:34:34.371045  487845 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 10:34:34.371063  487845 kic_runner.go:114] Args: [docker exec --privileged no-preload-027087 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 10:34:34.411450  487845 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:34:34.429002  487845 machine.go:93] provisionDockerMachine start ...
	I1018 10:34:34.429285  487845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:34:34.447342  487845 main.go:141] libmachine: Using SSH client type: native
	I1018 10:34:34.447678  487845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1018 10:34:34.447702  487845 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:34:34.448474  487845 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	
	
	==> CRI-O <==
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.710841599Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=67d15e67-5c33-4b10-a256-e5b1fabdc937 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.712187649Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5637eb54-b4fe-4ef5-828b-1f1ead484acb name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.713266406Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss/dashboard-metrics-scraper" id=8b86da07-c51f-4476-bfba-481fe11126f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.713505161Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.722296768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.72291481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.738433458Z" level=info msg="Created container 0f2ffe2bb4ec0f77d67b1c87c811d1147b9404af0c12ff58c54e7202460bcecc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss/dashboard-metrics-scraper" id=8b86da07-c51f-4476-bfba-481fe11126f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.741451207Z" level=info msg="Starting container: 0f2ffe2bb4ec0f77d67b1c87c811d1147b9404af0c12ff58c54e7202460bcecc" id=0009b883-9e2f-49c6-b8fa-5e6bd91ac652 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.746161397Z" level=info msg="Started container" PID=1645 containerID=0f2ffe2bb4ec0f77d67b1c87c811d1147b9404af0c12ff58c54e7202460bcecc description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss/dashboard-metrics-scraper id=0009b883-9e2f-49c6-b8fa-5e6bd91ac652 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f08de577d29e7cbb3bc2c085a26e0a4e9596b3540bb953da831bbd589e5a9c70
	Oct 18 10:34:20 embed-certs-101897 conmon[1643]: conmon 0f2ffe2bb4ec0f77d67b <ninfo>: container 1645 exited with status 1
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.992187343Z" level=info msg="Removing container: ef572c24d810b8b43389b43b81ce70d3ee8fb93158b470531dc2b64b161c994f" id=dae8c2eb-c7ce-49e5-8637-c83638142a28 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:34:21 embed-certs-101897 crio[651]: time="2025-10-18T10:34:21.002517051Z" level=info msg="Error loading conmon cgroup of container ef572c24d810b8b43389b43b81ce70d3ee8fb93158b470531dc2b64b161c994f: cgroup deleted" id=dae8c2eb-c7ce-49e5-8637-c83638142a28 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:34:21 embed-certs-101897 crio[651]: time="2025-10-18T10:34:21.00863895Z" level=info msg="Removed container ef572c24d810b8b43389b43b81ce70d3ee8fb93158b470531dc2b64b161c994f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss/dashboard-metrics-scraper" id=dae8c2eb-c7ce-49e5-8637-c83638142a28 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.047514897Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.052151643Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.052188558Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.052212066Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.055484193Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.055639969Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.055721062Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.059008483Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.059164259Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.059236236Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.062507427Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.062546706Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0f2ffe2bb4ec0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   f08de577d29e7       dashboard-metrics-scraper-6ffb444bf9-rhqss   kubernetes-dashboard
	82aa17e63acc5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   cd187e89281dc       storage-provisioner                          kube-system
	bedd687931a6c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   2f09030753b5e       kubernetes-dashboard-855c9754f9-7tlh9        kubernetes-dashboard
	444cf18cb855a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   d1fba4e0b27f0       coredns-66bc5c9577-hxrmf                     kube-system
	73ffb19e0ddc3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   a7ff423efb986       kube-proxy-bp45x                             kube-system
	0b098b4bb6d10       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   2d2402090cade       busybox                                      default
	34ddaa028d2f6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   da408a3b35ca0       kindnet-qt6bn                                kube-system
	1cd79be1ea9af       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   cd187e89281dc       storage-provisioner                          kube-system
	0a5b488299c29       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   46e087113e954       kube-controller-manager-embed-certs-101897   kube-system
	ddb705e0f64d6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   bb5a370bd9bc8       kube-scheduler-embed-certs-101897            kube-system
	ea13a5fdbf596       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   af3a372759917       etcd-embed-certs-101897                      kube-system
	98749e78e236d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   c7b66600ddd7d       kube-apiserver-embed-certs-101897            kube-system
	
	
	==> coredns [444cf18cb855af9d1e68665dcc30cb1b65a4ea7542eeb0eca0c74e9c0eb2d3ff] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50469 - 65026 "HINFO IN 306567304835938979.3027967906319139934. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.039085312s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-101897
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-101897
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=embed-certs-101897
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_32_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:32:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-101897
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:34:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:34:01 +0000   Sat, 18 Oct 2025 10:31:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:34:01 +0000   Sat, 18 Oct 2025 10:31:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:34:01 +0000   Sat, 18 Oct 2025 10:31:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 10:34:01 +0000   Sat, 18 Oct 2025 10:32:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-101897
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                ddfa9a95-8a31-40e5-b44e-f69ada911352
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 coredns-66bc5c9577-hxrmf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m28s
	  kube-system                 etcd-embed-certs-101897                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m34s
	  kube-system                 kindnet-qt6bn                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m28s
	  kube-system                 kube-apiserver-embed-certs-101897             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-controller-manager-embed-certs-101897    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-proxy-bp45x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-scheduler-embed-certs-101897             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rhqss    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7tlh9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m25s                  kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Warning  CgroupV1                 2m44s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m44s (x8 over 2m44s)  kubelet          Node embed-certs-101897 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m44s (x8 over 2m44s)  kubelet          Node embed-certs-101897 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m44s (x8 over 2m44s)  kubelet          Node embed-certs-101897 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m33s                  kubelet          Node embed-certs-101897 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m33s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m33s                  kubelet          Node embed-certs-101897 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s                  kubelet          Node embed-certs-101897 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m33s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m29s                  node-controller  Node embed-certs-101897 event: Registered Node embed-certs-101897 in Controller
	  Normal   NodeReady                106s                   kubelet          Node embed-certs-101897 status is now: NodeReady
	  Normal   Starting                 70s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node embed-certs-101897 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node embed-certs-101897 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)      kubelet          Node embed-certs-101897 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node embed-certs-101897 event: Registered Node embed-certs-101897 in Controller
	
	
	==> dmesg <==
	[Oct18 10:13] overlayfs: idmapped layers are currently not supported
	[Oct18 10:14] overlayfs: idmapped layers are currently not supported
	[Oct18 10:15] overlayfs: idmapped layers are currently not supported
	[Oct18 10:16] overlayfs: idmapped layers are currently not supported
	[  +1.944912] overlayfs: idmapped layers are currently not supported
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	[ +55.677340] overlayfs: idmapped layers are currently not supported
	[  +3.870584] overlayfs: idmapped layers are currently not supported
	[Oct18 10:24] overlayfs: idmapped layers are currently not supported
	[ +31.226998] overlayfs: idmapped layers are currently not supported
	[Oct18 10:27] overlayfs: idmapped layers are currently not supported
	[ +41.576921] overlayfs: idmapped layers are currently not supported
	[  +5.117406] overlayfs: idmapped layers are currently not supported
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	[Oct18 10:31] overlayfs: idmapped layers are currently not supported
	[  +3.453230] overlayfs: idmapped layers are currently not supported
	[Oct18 10:33] overlayfs: idmapped layers are currently not supported
	[  +6.524055] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ea13a5fdbf596d27a2a9bdd7254f8af427b96bdad19fa1221e096954a6b07ec4] <==
	{"level":"warn","ts":"2025-10-18T10:33:38.895039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:38.911968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:38.930267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:38.980230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.016537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.035705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.087442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.138210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.149039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.172933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.190184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.205357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.241039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.266682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.296644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.305813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.330975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.346734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.370533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.390625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.425338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.458948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.488035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.509331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.569321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52390","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:34:41 up  2:17,  0 user,  load average: 4.31, 4.27, 3.30
	Linux embed-certs-101897 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [34ddaa028d2f62182e466e60e132fe6f57e28a5686a9fbc0662ab810e428fde4] <==
	I1018 10:33:42.626718       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:33:42.627096       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 10:33:42.711001       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:33:42.711091       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:33:42.711112       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:33:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:33:43.046879       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:33:43.046950       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:33:43.046967       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:33:43.048268       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 10:34:13.047871       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 10:34:13.047874       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 10:34:13.048056       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 10:34:13.048078       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 10:34:14.647756       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 10:34:14.647790       1 metrics.go:72] Registering metrics
	I1018 10:34:14.647861       1 controller.go:711] "Syncing nftables rules"
	I1018 10:34:23.047208       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 10:34:23.047248       1 main.go:301] handling current node
	I1018 10:34:33.049276       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 10:34:33.049391       1 main.go:301] handling current node
	
	
	==> kube-apiserver [98749e78e236d9c4ba517df85eb017b3e2daf5eb1d15c7618a96f229e9c048e9] <==
	I1018 10:33:41.015397       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 10:33:41.030719       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:33:41.032216       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 10:33:41.032393       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 10:33:41.032470       1 aggregator.go:171] initial CRD sync complete...
	I1018 10:33:41.032504       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 10:33:41.032533       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 10:33:41.032563       1 cache.go:39] Caches are synced for autoregister controller
	I1018 10:33:41.036067       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 10:33:41.044371       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 10:33:41.044669       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 10:33:41.046153       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 10:33:41.061157       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1018 10:33:41.131749       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 10:33:41.416542       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 10:33:41.668665       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 10:33:43.197738       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 10:33:43.507372       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 10:33:43.624006       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 10:33:43.675994       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 10:33:44.028682       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.163.255"}
	I1018 10:33:44.078241       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.219.55"}
	I1018 10:33:45.309601       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 10:33:45.643059       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 10:33:45.736424       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0a5b488299c29fa8c745b6bbd5d7b3db828119f52e047c424ea4b9156c222088] <==
	I1018 10:33:45.278300       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 10:33:45.279028       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 10:33:45.279216       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 10:33:45.279319       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 10:33:45.279464       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 10:33:45.279477       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 10:33:45.279501       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 10:33:45.284878       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:33:45.284922       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 10:33:45.287864       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:33:45.295649       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 10:33:45.295782       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 10:33:45.297093       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 10:33:45.297558       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 10:33:45.297760       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 10:33:45.307362       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 10:33:45.307555       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 10:33:45.307704       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-101897"
	I1018 10:33:45.307794       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 10:33:45.321287       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 10:33:45.321475       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:33:45.328111       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 10:33:45.329797       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 10:33:45.330026       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 10:33:45.352105       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [73ffb19e0ddc3982b14bf8a1380764785f013d969977dd645a673cd8aef57ec1] <==
	I1018 10:33:43.327173       1 server_linux.go:53] "Using iptables proxy"
	I1018 10:33:43.707389       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 10:33:43.814520       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 10:33:43.814581       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 10:33:43.814665       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 10:33:44.106193       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:33:44.106314       1 server_linux.go:132] "Using iptables Proxier"
	I1018 10:33:44.113081       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 10:33:44.113535       1 server.go:527] "Version info" version="v1.34.1"
	I1018 10:33:44.113776       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:33:44.125656       1 config.go:200] "Starting service config controller"
	I1018 10:33:44.129811       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 10:33:44.130328       1 config.go:106] "Starting endpoint slice config controller"
	I1018 10:33:44.130383       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 10:33:44.130424       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 10:33:44.130458       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 10:33:44.156064       1 config.go:309] "Starting node config controller"
	I1018 10:33:44.156104       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 10:33:44.156113       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 10:33:44.233926       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 10:33:44.234148       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 10:33:44.234271       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ddb705e0f64d66513424efc45237983978c1000f91094a9731d126dd8cab8ac7] <==
	I1018 10:33:39.132780       1 serving.go:386] Generated self-signed cert in-memory
	W1018 10:33:40.921306       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 10:33:40.921376       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 10:33:40.921388       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 10:33:40.921394       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 10:33:41.052409       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 10:33:41.058605       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:33:41.073623       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 10:33:41.082065       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 10:33:41.082123       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:33:41.098570       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:33:41.303431       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 10:33:45 embed-certs-101897 kubelet[780]: I1018 10:33:45.705054     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjmlk\" (UniqueName: \"kubernetes.io/projected/9eb1ff49-6a0f-4015-91cc-7fbd126b4adc-kube-api-access-jjmlk\") pod \"dashboard-metrics-scraper-6ffb444bf9-rhqss\" (UID: \"9eb1ff49-6a0f-4015-91cc-7fbd126b4adc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss"
	Oct 18 10:33:45 embed-certs-101897 kubelet[780]: I1018 10:33:45.705688     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9eb1ff49-6a0f-4015-91cc-7fbd126b4adc-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-rhqss\" (UID: \"9eb1ff49-6a0f-4015-91cc-7fbd126b4adc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss"
	Oct 18 10:33:45 embed-certs-101897 kubelet[780]: I1018 10:33:45.705832     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ab31733e-962a-4dd9-9b6f-78be82a1d96b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-7tlh9\" (UID: \"ab31733e-962a-4dd9-9b6f-78be82a1d96b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7tlh9"
	Oct 18 10:33:45 embed-certs-101897 kubelet[780]: I1018 10:33:45.705952     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ffd4\" (UniqueName: \"kubernetes.io/projected/ab31733e-962a-4dd9-9b6f-78be82a1d96b-kube-api-access-6ffd4\") pod \"kubernetes-dashboard-855c9754f9-7tlh9\" (UID: \"ab31733e-962a-4dd9-9b6f-78be82a1d96b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7tlh9"
	Oct 18 10:33:45 embed-certs-101897 kubelet[780]: W1018 10:33:45.898721     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6/crio-2f09030753b5ec397bd14876bdb53897a9999e26f5faf40563a4109681788de6 WatchSource:0}: Error finding container 2f09030753b5ec397bd14876bdb53897a9999e26f5faf40563a4109681788de6: Status 404 returned error can't find the container with id 2f09030753b5ec397bd14876bdb53897a9999e26f5faf40563a4109681788de6
	Oct 18 10:33:46 embed-certs-101897 kubelet[780]: W1018 10:33:46.151247     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6/crio-f08de577d29e7cbb3bc2c085a26e0a4e9596b3540bb953da831bbd589e5a9c70 WatchSource:0}: Error finding container f08de577d29e7cbb3bc2c085a26e0a4e9596b3540bb953da831bbd589e5a9c70: Status 404 returned error can't find the container with id f08de577d29e7cbb3bc2c085a26e0a4e9596b3540bb953da831bbd589e5a9c70
	Oct 18 10:33:58 embed-certs-101897 kubelet[780]: I1018 10:33:58.917545     780 scope.go:117] "RemoveContainer" containerID="16fbc7f8980066077d959a5961447cdbeabd3c47849069b014ba3615e59d3c95"
	Oct 18 10:33:58 embed-certs-101897 kubelet[780]: I1018 10:33:58.940315     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7tlh9" podStartSLOduration=6.307477175 podStartE2EDuration="13.940297373s" podCreationTimestamp="2025-10-18 10:33:45 +0000 UTC" firstStartedPulling="2025-10-18 10:33:45.90416233 +0000 UTC m=+14.673174775" lastFinishedPulling="2025-10-18 10:33:53.536982463 +0000 UTC m=+22.305994973" observedRunningTime="2025-10-18 10:33:53.93347001 +0000 UTC m=+22.702482446" watchObservedRunningTime="2025-10-18 10:33:58.940297373 +0000 UTC m=+27.709309810"
	Oct 18 10:33:59 embed-certs-101897 kubelet[780]: I1018 10:33:59.923374     780 scope.go:117] "RemoveContainer" containerID="16fbc7f8980066077d959a5961447cdbeabd3c47849069b014ba3615e59d3c95"
	Oct 18 10:33:59 embed-certs-101897 kubelet[780]: I1018 10:33:59.923541     780 scope.go:117] "RemoveContainer" containerID="ef572c24d810b8b43389b43b81ce70d3ee8fb93158b470531dc2b64b161c994f"
	Oct 18 10:33:59 embed-certs-101897 kubelet[780]: E1018 10:33:59.923712     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhqss_kubernetes-dashboard(9eb1ff49-6a0f-4015-91cc-7fbd126b4adc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss" podUID="9eb1ff49-6a0f-4015-91cc-7fbd126b4adc"
	Oct 18 10:34:00 embed-certs-101897 kubelet[780]: I1018 10:34:00.925747     780 scope.go:117] "RemoveContainer" containerID="ef572c24d810b8b43389b43b81ce70d3ee8fb93158b470531dc2b64b161c994f"
	Oct 18 10:34:00 embed-certs-101897 kubelet[780]: E1018 10:34:00.925907     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhqss_kubernetes-dashboard(9eb1ff49-6a0f-4015-91cc-7fbd126b4adc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss" podUID="9eb1ff49-6a0f-4015-91cc-7fbd126b4adc"
	Oct 18 10:34:06 embed-certs-101897 kubelet[780]: I1018 10:34:06.127236     780 scope.go:117] "RemoveContainer" containerID="ef572c24d810b8b43389b43b81ce70d3ee8fb93158b470531dc2b64b161c994f"
	Oct 18 10:34:06 embed-certs-101897 kubelet[780]: E1018 10:34:06.127438     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhqss_kubernetes-dashboard(9eb1ff49-6a0f-4015-91cc-7fbd126b4adc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss" podUID="9eb1ff49-6a0f-4015-91cc-7fbd126b4adc"
	Oct 18 10:34:12 embed-certs-101897 kubelet[780]: I1018 10:34:12.958862     780 scope.go:117] "RemoveContainer" containerID="1cd79be1ea9aff71ffca848e73458fa728047d78d2e68ae5aeb1565abb1f298c"
	Oct 18 10:34:20 embed-certs-101897 kubelet[780]: I1018 10:34:20.710233     780 scope.go:117] "RemoveContainer" containerID="ef572c24d810b8b43389b43b81ce70d3ee8fb93158b470531dc2b64b161c994f"
	Oct 18 10:34:20 embed-certs-101897 kubelet[780]: I1018 10:34:20.986152     780 scope.go:117] "RemoveContainer" containerID="ef572c24d810b8b43389b43b81ce70d3ee8fb93158b470531dc2b64b161c994f"
	Oct 18 10:34:20 embed-certs-101897 kubelet[780]: I1018 10:34:20.986569     780 scope.go:117] "RemoveContainer" containerID="0f2ffe2bb4ec0f77d67b1c87c811d1147b9404af0c12ff58c54e7202460bcecc"
	Oct 18 10:34:20 embed-certs-101897 kubelet[780]: E1018 10:34:20.986766     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhqss_kubernetes-dashboard(9eb1ff49-6a0f-4015-91cc-7fbd126b4adc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss" podUID="9eb1ff49-6a0f-4015-91cc-7fbd126b4adc"
	Oct 18 10:34:26 embed-certs-101897 kubelet[780]: I1018 10:34:26.127845     780 scope.go:117] "RemoveContainer" containerID="0f2ffe2bb4ec0f77d67b1c87c811d1147b9404af0c12ff58c54e7202460bcecc"
	Oct 18 10:34:26 embed-certs-101897 kubelet[780]: E1018 10:34:26.128060     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhqss_kubernetes-dashboard(9eb1ff49-6a0f-4015-91cc-7fbd126b4adc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss" podUID="9eb1ff49-6a0f-4015-91cc-7fbd126b4adc"
	Oct 18 10:34:38 embed-certs-101897 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 10:34:38 embed-certs-101897 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 10:34:38 embed-certs-101897 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [bedd687931a6c72eba28323577cb0d8ab111b649fbb82f440e5a34bc42246086] <==
	2025/10/18 10:33:53 Using namespace: kubernetes-dashboard
	2025/10/18 10:33:53 Using in-cluster config to connect to apiserver
	2025/10/18 10:33:53 Using secret token for csrf signing
	2025/10/18 10:33:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 10:33:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 10:33:53 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 10:33:53 Generating JWE encryption key
	2025/10/18 10:33:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 10:33:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 10:33:54 Initializing JWE encryption key from synchronized object
	2025/10/18 10:33:54 Creating in-cluster Sidecar client
	2025/10/18 10:33:54 Serving insecurely on HTTP port: 9090
	2025/10/18 10:33:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 10:34:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 10:33:53 Starting overwatch
	
	
	==> storage-provisioner [1cd79be1ea9aff71ffca848e73458fa728047d78d2e68ae5aeb1565abb1f298c] <==
	I1018 10:33:42.700330       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 10:34:12.926556       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [82aa17e63acc5c9ee30d47b290ef374b230450f3fa79a6048d75d01c95af1229] <==
	I1018 10:34:13.023230       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 10:34:13.023296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 10:34:13.026227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:16.481731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:20.743627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:24.342294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:27.396530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:30.421890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:30.427553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:34:30.427756       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 10:34:30.427955       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-101897_f3c57c77-aa3c-49b5-92ff-36fd8797c468!
	I1018 10:34:30.428125       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3b672bc-ac74-4ae1-9e75-a8332f5a8fca", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-101897_f3c57c77-aa3c-49b5-92ff-36fd8797c468 became leader
	W1018 10:34:30.432927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:30.440127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:34:30.533260       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-101897_f3c57c77-aa3c-49b5-92ff-36fd8797c468!
	W1018 10:34:32.448910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:32.455776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:34.459471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:34.464213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:36.467209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:36.471293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:38.477020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:38.489797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:40.494395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:40.501491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-101897 -n embed-certs-101897
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-101897 -n embed-certs-101897: exit status 2 (507.922171ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-101897 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-101897
helpers_test.go:243: (dbg) docker inspect embed-certs-101897:

-- stdout --
	[
	    {
	        "Id": "a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6",
	        "Created": "2025-10-18T10:31:37.027393759Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 482913,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:33:22.813067208Z",
	            "FinishedAt": "2025-10-18T10:33:21.827772315Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6/hostname",
	        "HostsPath": "/var/lib/docker/containers/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6/hosts",
	        "LogPath": "/var/lib/docker/containers/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6-json.log",
	        "Name": "/embed-certs-101897",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-101897:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-101897",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6",
	                "LowerDir": "/var/lib/docker/overlay2/23e1fcb97e4afbd4d0b2f645ecf5499e46c462cdb419f43fa66b0ff224da5d89-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23e1fcb97e4afbd4d0b2f645ecf5499e46c462cdb419f43fa66b0ff224da5d89/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23e1fcb97e4afbd4d0b2f645ecf5499e46c462cdb419f43fa66b0ff224da5d89/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23e1fcb97e4afbd4d0b2f645ecf5499e46c462cdb419f43fa66b0ff224da5d89/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-101897",
	                "Source": "/var/lib/docker/volumes/embed-certs-101897/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-101897",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-101897",
	                "name.minikube.sigs.k8s.io": "embed-certs-101897",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e1e7ea96b603c15490122dae014747cc570a41918d84f3ef43639b19011b9f69",
	            "SandboxKey": "/var/run/docker/netns/e1e7ea96b603",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-101897": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:71:43:ac:c6:bb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6f55db3cc24016b7f9a5c2a2cb317625e0e1d0053a68e0f05bbc6f3ae8ab71a",
	                    "EndpointID": "80c7bf53cacd944208bec54baf5dce39e5e411fbe69c0a0cc1bcc27c05e3bd8c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-101897",
	                        "a8859be818ee"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
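The full "docker inspect" dump above is useful for post-mortems, but single fields can be pulled with a Go template, which is what the later "docker container inspect --format={{.State.Status}}" calls in this log do. A minimal sketch in Go; the container name is taken from this report, and the helper name is illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspectField shells out to docker and extracts one field via a Go
// template, avoiding a full JSON parse of the inspect output.
func inspectField(name, tmpl string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", tmpl, name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	status, err := inspectField("embed-certs-101897", "{{.State.Status}}")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("container status:", status) // e.g. "running"
}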
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-101897 -n embed-certs-101897
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-101897 -n embed-certs-101897: exit status 2 (716.090627ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
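Exit status 2 from "minikube status" is expected here: status encodes component state in the exit code, so stdout can report Running for the host while another component is paused or stopped, which is why the harness notes "(may be ok)" instead of failing. A minimal sketch of that tolerance, assuming only the binary and flags shown above (the surrounding scaffolding is illustrative, not the harness's actual code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "embed-certs-101897")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 2 {
		// Exit 2 means some component is not in the expected state;
		// the host itself may still be up, so record and continue.
		fmt.Printf("status (exit 2, may be ok): %s", out)
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("status: %s", out)
}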
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-101897 logs -n 25
E1018 10:34:44.361572  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:34:44.368186  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:34:44.379669  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:34:44.400985  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:34:44.442806  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:34:44.524107  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:34:44.685456  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:34:45.007319  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:34:45.648994  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-101897 logs -n 25: (2.099573157s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:30 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p cert-expiration-733799 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-733799       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ image   │ old-k8s-version-309062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ pause   │ -p old-k8s-version-309062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │                     │
	│ delete  │ -p old-k8s-version-309062                                                                                                                                                                                                                     │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ delete  │ -p old-k8s-version-309062                                                                                                                                                                                                                     │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:32 UTC │
	│ delete  │ -p cert-expiration-733799                                                                                                                                                                                                                     │ cert-expiration-733799       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:32 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-715182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-715182 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-101897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	│ stop    │ -p embed-certs-101897 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-715182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-101897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ start   │ -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:34 UTC │
	│ image   │ default-k8s-diff-port-715182 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ pause   │ -p default-k8s-diff-port-715182 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-715182                                                                                                                                                                                                               │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p default-k8s-diff-port-715182                                                                                                                                                                                                               │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-922359                                                                                                                                                                                                               │ disable-driver-mounts-922359 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ start   │ -p no-preload-027087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	│ image   │ embed-certs-101897 image list --format=json                                                                                                                                                                                                   │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ pause   │ -p embed-certs-101897 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:34:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
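Every line below follows that klog-style header. A small sketch that splits such a line into its fields; the regular expression and field names are illustrative, derived only from the format string above:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	line := "I1018 10:34:31.678391  487845 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s mmdd=%s time=%s threadid=%s source=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}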
	I1018 10:34:31.678391  487845 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:34:31.678508  487845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:34:31.678514  487845 out.go:374] Setting ErrFile to fd 2...
	I1018 10:34:31.678519  487845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:34:31.678909  487845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:34:31.679470  487845 out.go:368] Setting JSON to false
	I1018 10:34:31.680415  487845 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8222,"bootTime":1760775450,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:34:31.680501  487845 start.go:141] virtualization:  
	I1018 10:34:31.687718  487845 out.go:179] * [no-preload-027087] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:34:31.691490  487845 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:34:31.691568  487845 notify.go:220] Checking for updates...
	I1018 10:34:31.699014  487845 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:34:31.702334  487845 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:34:31.705646  487845 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:34:31.708821  487845 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:34:31.712930  487845 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:34:31.716820  487845 config.go:182] Loaded profile config "embed-certs-101897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:34:31.717018  487845 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:34:31.748966  487845 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:34:31.749095  487845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:34:31.840263  487845 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 10:34:31.828996055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:34:31.840377  487845 docker.go:318] overlay module found
	I1018 10:34:31.843473  487845 out.go:179] * Using the docker driver based on user configuration
	I1018 10:34:31.846715  487845 start.go:305] selected driver: docker
	I1018 10:34:31.846771  487845 start.go:925] validating driver "docker" against <nil>
	I1018 10:34:31.846821  487845 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:34:31.849088  487845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:34:31.931142  487845 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 10:34:31.919236549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:34:31.931316  487845 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 10:34:31.931553  487845 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:34:31.934516  487845 out.go:179] * Using Docker driver with root privileges
	I1018 10:34:31.937391  487845 cni.go:84] Creating CNI manager for ""
	I1018 10:34:31.937479  487845 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:34:31.937496  487845 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 10:34:31.937580  487845 start.go:349] cluster config:
	{Name:no-preload-027087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-027087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:34:31.940865  487845 out.go:179] * Starting "no-preload-027087" primary control-plane node in "no-preload-027087" cluster
	I1018 10:34:31.944583  487845 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:34:31.947581  487845 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:34:31.950553  487845 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:34:31.950631  487845 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:34:31.950955  487845 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/config.json ...
	I1018 10:34:31.950988  487845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/config.json: {Name:mkcd35f6ee370444afb52d25334f8712a4892472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:34:31.954380  487845 cache.go:107] acquiring lock: {Name:mkaf3d4648d07ea61f5c43b4ac6cff6e96e07d0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.954517  487845 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 10:34:31.954531  487845 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 3.282311ms
	I1018 10:34:31.954623  487845 cache.go:107] acquiring lock: {Name:mkce90ae98faaf046844c77feccd02a8c89b22bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.955546  487845 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 10:34:31.954651  487845 cache.go:107] acquiring lock: {Name:mkaa713f6c6c749f7890994ea47ccb489ab7b76a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.955671  487845 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 10:34:31.955759  487845 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 10:34:31.954672  487845 cache.go:107] acquiring lock: {Name:mkbf154924b5d05f1add0f80d2d8992cab46ca22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.956105  487845 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 10:34:31.954689  487845 cache.go:107] acquiring lock: {Name:mk7c500c022aee187177cdcb3e6cd138895cc689 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.956248  487845 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 10:34:31.954706  487845 cache.go:107] acquiring lock: {Name:mk8d87cb313c81485b1cabba19862a22e85903db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.956410  487845 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1018 10:34:31.954730  487845 cache.go:107] acquiring lock: {Name:mkf60d23fd6f24668b2e7aa1b277366e0a8c4f15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.956589  487845 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1018 10:34:31.954746  487845 cache.go:107] acquiring lock: {Name:mk79330e484fcb6a5af61229914c16bea91c5633 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.956775  487845 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 10:34:31.958664  487845 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1018 10:34:31.958825  487845 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 10:34:31.960599  487845 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 10:34:31.960853  487845 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 10:34:31.961661  487845 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 10:34:31.962655  487845 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1018 10:34:31.962750  487845 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 10:34:31.978062  487845 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:34:31.978087  487845 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 10:34:31.978100  487845 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:34:31.978123  487845 start.go:360] acquireMachinesLock for no-preload-027087: {Name:mk3407a2c92d7e64b372433da7fc52893eca365e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:31.978235  487845 start.go:364] duration metric: took 90.347µs to acquireMachinesLock for "no-preload-027087"
	I1018 10:34:31.978268  487845 start.go:93] Provisioning new machine with config: &{Name:no-preload-027087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-027087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:34:31.978342  487845 start.go:125] createHost starting for "" (driver="docker")
	I1018 10:34:31.982007  487845 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 10:34:31.982290  487845 start.go:159] libmachine.API.Create for "no-preload-027087" (driver="docker")
	I1018 10:34:31.982336  487845 client.go:168] LocalClient.Create starting
	I1018 10:34:31.982422  487845 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem
	I1018 10:34:31.982460  487845 main.go:141] libmachine: Decoding PEM data...
	I1018 10:34:31.982477  487845 main.go:141] libmachine: Parsing certificate...
	I1018 10:34:31.982541  487845 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem
	I1018 10:34:31.982563  487845 main.go:141] libmachine: Decoding PEM data...
	I1018 10:34:31.982574  487845 main.go:141] libmachine: Parsing certificate...
	I1018 10:34:31.982942  487845 cli_runner.go:164] Run: docker network inspect no-preload-027087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 10:34:32.005648  487845 cli_runner.go:211] docker network inspect no-preload-027087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 10:34:32.005760  487845 network_create.go:284] running [docker network inspect no-preload-027087] to gather additional debugging logs...
	I1018 10:34:32.005786  487845 cli_runner.go:164] Run: docker network inspect no-preload-027087
	W1018 10:34:32.024079  487845 cli_runner.go:211] docker network inspect no-preload-027087 returned with exit code 1
	I1018 10:34:32.024109  487845 network_create.go:287] error running [docker network inspect no-preload-027087]: docker network inspect no-preload-027087: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-027087 not found
	I1018 10:34:32.024122  487845 network_create.go:289] output of [docker network inspect no-preload-027087]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-027087 not found
	
	** /stderr **
	I1018 10:34:32.024228  487845 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:34:32.043427  487845 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-57e2bd20fa2f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:61:d0:06:18:0c} reservation:<nil>}
	I1018 10:34:32.043710  487845 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb4a8c61b69d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:8c:0f:03:ab:d8} reservation:<nil>}
	I1018 10:34:32.044067  487845 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1d3a8356dfdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:ce:7a:d0:e4:d4} reservation:<nil>}
	I1018 10:34:32.044592  487845 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c8a070}
	I1018 10:34:32.044652  487845 network_create.go:124] attempt to create docker network no-preload-027087 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 10:34:32.044743  487845 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-027087 no-preload-027087
	I1018 10:34:32.122836  487845 network_create.go:108] docker network no-preload-027087 192.168.76.0/24 created
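The network.go lines above show the subnet scan: minikube walks candidate 192.168.x.0/24 ranges (the skipped 49, 58, 67 and chosen 76 suggest a step of 9), skips any already backed by a bridge interface, and creates the Docker network on the first free one. A simplified sketch of that scan, with a plain map standing in for the real interface check:

package main

import "fmt"

// freeSubnet returns the first candidate /24 that is not already taken.
// The step of 9 mirrors the sequence seen in the log (49, 58, 67 -> 76).
func freeSubnet(taken map[string]bool) (string, bool) {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // existing bridges from earlier clusters
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	if cidr, ok := freeSubnet(taken); ok {
		fmt.Println("using free private subnet", cidr) // 192.168.76.0/24
	}
}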
	I1018 10:34:32.122885  487845 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-027087" container
	I1018 10:34:32.122983  487845 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 10:34:32.140807  487845 cli_runner.go:164] Run: docker volume create no-preload-027087 --label name.minikube.sigs.k8s.io=no-preload-027087 --label created_by.minikube.sigs.k8s.io=true
	I1018 10:34:32.158612  487845 oci.go:103] Successfully created a docker volume no-preload-027087
	I1018 10:34:32.158704  487845 cli_runner.go:164] Run: docker run --rm --name no-preload-027087-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-027087 --entrypoint /usr/bin/test -v no-preload-027087:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 10:34:32.300803  487845 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1018 10:34:32.300866  487845 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1018 10:34:32.301603  487845 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1018 10:34:32.316247  487845 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1018 10:34:32.318475  487845 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1018 10:34:32.322078  487845 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1018 10:34:32.336933  487845 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1018 10:34:32.363950  487845 cache.go:157] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1018 10:34:32.364007  487845 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 409.275131ms
	I1018 10:34:32.364043  487845 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 10:34:32.729121  487845 cache.go:157] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 10:34:32.729155  487845 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 774.534688ms
	I1018 10:34:32.729214  487845 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 10:34:32.836214  487845 oci.go:107] Successfully prepared a docker volume no-preload-027087
	I1018 10:34:32.836251  487845 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1018 10:34:32.836416  487845 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 10:34:32.836537  487845 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 10:34:32.893541  487845 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-027087 --name no-preload-027087 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-027087 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-027087 --network no-preload-027087 --ip 192.168.76.2 --volume no-preload-027087:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 10:34:33.213210  487845 cache.go:157] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 10:34:33.213255  487845 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.258565102s
	I1018 10:34:33.213270  487845 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 10:34:33.314923  487845 cache.go:157] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 10:34:33.314947  487845 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.360198862s
	I1018 10:34:33.314958  487845 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 10:34:33.325542  487845 cache.go:157] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 10:34:33.325567  487845 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.370895082s
	I1018 10:34:33.325579  487845 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 10:34:33.339418  487845 cache.go:157] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 10:34:33.339443  487845 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.384791446s
	I1018 10:34:33.339454  487845 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 10:34:33.402553  487845 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Running}}
	I1018 10:34:33.436308  487845 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:34:33.487021  487845 cli_runner.go:164] Run: docker exec no-preload-027087 stat /var/lib/dpkg/alternatives/iptables
	I1018 10:34:33.562101  487845 oci.go:144] the created container "no-preload-027087" has a running status.
	I1018 10:34:33.562141  487845 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa...
	I1018 10:34:34.231642  487845 cache.go:157] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 10:34:34.231674  487845 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.276966792s
	I1018 10:34:34.231708  487845 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 10:34:34.231725  487845 cache.go:87] Successfully saved all images to host disk.
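Each "cache image ... took ..." / "save to tar file ... succeeded" pair above corresponds to one image saved as a tar under .minikube/cache/images, guarded by a per-image lock so concurrent savers don't collide, with an existence check making repeat runs a no-op. A rough sketch of that pattern; a sync.Mutex stands in for the file-based lock (lock.go) seen in the log, and the cache path and placeholder write are illustrative:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
)

var cacheDir = filepath.Join(os.TempDir(), "minikube-cache", "images", "arm64")

// cacheImage saves one image to a tar file under cacheDir, skipping work
// if the file already exists. mu stands in for the per-image file lock.
func cacheImage(img string, mu *sync.Mutex) error {
	mu.Lock()
	defer mu.Unlock()
	dest := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
	if _, err := os.Stat(dest); err == nil {
		return nil // cache hit: the "exists" lines in the log above
	}
	if err := os.MkdirAll(filepath.Dir(dest), 0o755); err != nil {
		return err
	}
	// Real code pulls the image and writes its tarball; a placeholder
	// file stands in for that here.
	return os.WriteFile(dest, []byte("tar placeholder"), 0o644)
}

func main() {
	images := []string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.6.4-0"}
	locks := make([]sync.Mutex, len(images))
	var wg sync.WaitGroup
	for i, img := range images {
		wg.Add(1)
		go func(i int, img string) {
			defer wg.Done()
			if err := cacheImage(img, &locks[i]); err != nil {
				fmt.Println("cache", img, "failed:", err)
				return
			}
			fmt.Println("cached", img)
		}(i, img)
	}
	wg.Wait()
}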
	I1018 10:34:34.334061  487845 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 10:34:34.352249  487845 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:34:34.371045  487845 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 10:34:34.371063  487845 kic_runner.go:114] Args: [docker exec --privileged no-preload-027087 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 10:34:34.411450  487845 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:34:34.429002  487845 machine.go:93] provisionDockerMachine start ...
	I1018 10:34:34.429285  487845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:34:34.447342  487845 main.go:141] libmachine: Using SSH client type: native
	I1018 10:34:34.447678  487845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1018 10:34:34.447702  487845 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:34:34.448474  487845 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 10:34:37.624915  487845 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-027087
	
	I1018 10:34:37.624939  487845 ubuntu.go:182] provisioning hostname "no-preload-027087"
	I1018 10:34:37.625011  487845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:34:37.643683  487845 main.go:141] libmachine: Using SSH client type: native
	I1018 10:34:37.644011  487845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1018 10:34:37.644027  487845 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-027087 && echo "no-preload-027087" | sudo tee /etc/hostname
	I1018 10:34:37.821804  487845 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-027087
	
	I1018 10:34:37.821877  487845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:34:37.852608  487845 main.go:141] libmachine: Using SSH client type: native
	I1018 10:34:37.852923  487845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1018 10:34:37.852940  487845 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-027087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-027087/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-027087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:34:38.018908  487845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
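The /etc/hosts script above is idempotent: it rewrites an existing 127.0.1.1 alias in place and only appends a new line when none exists, so repeated provisioning runs do not accumulate entries. A quick convergence check (assuming shell access to the container):

	# Hypothetical check that the hostname step converged
	docker exec no-preload-027087 sh -c 'hostname && grep no-preload-027087 /etc/hosts'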
	I1018 10:34:38.018933  487845 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:34:38.018951  487845 ubuntu.go:190] setting up certificates
	I1018 10:34:38.018962  487845 provision.go:84] configureAuth start
	I1018 10:34:38.019032  487845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-027087
	I1018 10:34:38.042718  487845 provision.go:143] copyHostCerts
	I1018 10:34:38.042802  487845 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:34:38.042812  487845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:34:38.042894  487845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:34:38.043017  487845 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:34:38.043023  487845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:34:38.043052  487845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:34:38.043111  487845 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:34:38.043115  487845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:34:38.043139  487845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:34:38.043184  487845 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.no-preload-027087 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-027087]
	I1018 10:34:38.576905  487845 provision.go:177] copyRemoteCerts
	I1018 10:34:38.576984  487845 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:34:38.577058  487845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:34:38.595086  487845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:34:38.697530  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:34:38.718465  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 10:34:38.737256  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:34:38.759141  487845 provision.go:87] duration metric: took 740.154267ms to configureAuth
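provision.go:117 generates a server certificate signed by the minikube CA with the SANs listed in the log, and copyRemoteCerts then scp's ca.pem, server.pem, and server-key.pem into /etc/docker on the node. An openssl equivalent of the certificate step, for illustration only (minikube generates the cert in Go; the key size and validity period here are assumptions):

	# Hypothetical openssl equivalent of the server-cert generation
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.no-preload-027087" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:no-preload-027087') \
	  -out server.pem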
	I1018 10:34:38.759170  487845 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:34:38.759364  487845 config.go:182] Loaded profile config "no-preload-027087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:34:38.759474  487845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:34:38.777729  487845 main.go:141] libmachine: Using SSH client type: native
	I1018 10:34:38.778040  487845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1018 10:34:38.778060  487845 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:34:39.164604  487845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:34:39.164623  487845 machine.go:96] duration metric: took 4.73560019s to provisionDockerMachine
	I1018 10:34:39.164632  487845 client.go:171] duration metric: took 7.182285296s to LocalClient.Create
	I1018 10:34:39.164646  487845 start.go:167] duration metric: took 7.182358535s to libmachine.API.Create "no-preload-027087"
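The /etc/sysconfig/crio.minikube drop-in written above marks the service CIDR (10.96.0.0/12) as an insecure registry so ClusterIP-backed in-cluster registries work without TLS; the systemctl restart applies it. Reading it back (assumption: the file path exactly as written by the command above):

	# Hypothetical read-back of the drop-in
	docker exec no-preload-027087 cat /etc/sysconfig/crio.minikube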
	I1018 10:34:39.164653  487845 start.go:293] postStartSetup for "no-preload-027087" (driver="docker")
	I1018 10:34:39.164668  487845 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:34:39.164729  487845 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:34:39.164769  487845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:34:39.188708  487845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:34:39.297585  487845 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:34:39.301343  487845 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:34:39.301373  487845 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:34:39.301384  487845 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:34:39.301448  487845 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:34:39.301528  487845 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:34:39.301633  487845 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:34:39.309484  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:34:39.328445  487845 start.go:296] duration metric: took 163.776471ms for postStartSetup
	I1018 10:34:39.328856  487845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-027087
	I1018 10:34:39.347050  487845 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/config.json ...
	I1018 10:34:39.347364  487845 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:34:39.347420  487845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:34:39.367875  487845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:34:39.474508  487845 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:34:39.479397  487845 start.go:128] duration metric: took 7.501039998s to createHost
	I1018 10:34:39.479421  487845 start.go:83] releasing machines lock for "no-preload-027087", held for 7.501171593s
	I1018 10:34:39.479492  487845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-027087
	I1018 10:34:39.499456  487845 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:34:39.499537  487845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:34:39.499456  487845 ssh_runner.go:195] Run: cat /version.json
	I1018 10:34:39.499637  487845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:34:39.524012  487845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:34:39.536372  487845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:34:39.748621  487845 ssh_runner.go:195] Run: systemctl --version
	I1018 10:34:39.756413  487845 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:34:39.805572  487845 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:34:39.810642  487845 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:34:39.810711  487845 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:34:39.848489  487845 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 10:34:39.848516  487845 start.go:495] detecting cgroup driver to use...
	I1018 10:34:39.848552  487845 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:34:39.848606  487845 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:34:39.873001  487845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:34:39.889264  487845 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:34:39.889363  487845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:34:39.909987  487845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:34:39.930846  487845 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:34:40.132209  487845 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:34:40.307739  487845 docker.go:234] disabling docker service ...
	I1018 10:34:40.307804  487845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:34:40.332764  487845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:34:40.348023  487845 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:34:40.537438  487845 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:34:40.729367  487845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:34:40.746273  487845 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:34:40.772233  487845 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:34:40.772303  487845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:34:40.784161  487845 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:34:40.784225  487845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:34:40.794274  487845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:34:40.806958  487845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:34:40.821680  487845 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:34:40.830567  487845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:34:40.842646  487845 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:34:40.858040  487845 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:34:40.867681  487845 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:34:40.877226  487845 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:34:40.886090  487845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:34:41.045386  487845 ssh_runner.go:195] Run: sudo systemctl restart crio
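The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, forces the cgroupfs cgroup manager, resets conmon_cgroup to "pod", and seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0 before reloading systemd and restarting CRI-O. One way to check the intended end state (the expected lines are reconstructed from the sed expressions, not captured from the node):

	# Hypothetical verification of the drop-in after the edits above
	docker exec no-preload-027087 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected (reconstructed from the sed expressions):
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",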
	I1018 10:34:41.211332  487845 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:34:41.211426  487845 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:34:41.216514  487845 start.go:563] Will wait 60s for crictl version
	I1018 10:34:41.216614  487845 ssh_runner.go:195] Run: which crictl
	I1018 10:34:41.222332  487845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:34:41.267218  487845 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
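start.go waits up to 60s each for the CRI socket to appear and for crictl to answer; the version block above is what a healthy endpoint returns. The same probe by hand (socket path taken from the log):

	# Hypothetical manual version probe against the CRI-O socket
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version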
	I1018 10:34:41.267349  487845 ssh_runner.go:195] Run: crio --version
	I1018 10:34:41.308154  487845 ssh_runner.go:195] Run: crio --version
	I1018 10:34:41.353322  487845 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:34:41.356219  487845 cli_runner.go:164] Run: docker network inspect no-preload-027087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:34:41.375921  487845 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 10:34:41.380440  487845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:34:41.390795  487845 kubeadm.go:883] updating cluster {Name:no-preload-027087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-027087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:34:41.390904  487845 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:34:41.390949  487845 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:34:41.432362  487845 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1018 10:34:41.432392  487845 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1018 10:34:41.432432  487845 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:34:41.432647  487845 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 10:34:41.432761  487845 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 10:34:41.432852  487845 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 10:34:41.432940  487845 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 10:34:41.433033  487845 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1018 10:34:41.433121  487845 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1018 10:34:41.433239  487845 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 10:34:41.434505  487845 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 10:34:41.434758  487845 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 10:34:41.434920  487845 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 10:34:41.435061  487845 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 10:34:41.435197  487845 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1018 10:34:41.435334  487845 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1018 10:34:41.435363  487845 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:34:41.436255  487845 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 10:34:41.670733  487845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1018 10:34:41.677776  487845 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
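Because no preload tarball exists for this crio/arm64 combination, cache_images.go falls back to probing the runtime for each required image (the podman image inspect runs above) and loads the per-image tarballs cached earlier for any that are missing. A manual equivalent of one sideload (assumptions: the tarball is first copied from the host cache onto the node, and podman shares CRI-O's image store in this kicbase image):

	# Hypothetical manual sideload of one cached image, then confirm the runtime sees it
	docker cp /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 \
	  no-preload-027087:/tmp/etcd.tar
	docker exec no-preload-027087 sudo podman load -i /tmp/etcd.tar
	docker exec no-preload-027087 sudo crictl images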
	
	
	==> CRI-O <==
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.710841599Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=67d15e67-5c33-4b10-a256-e5b1fabdc937 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.712187649Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5637eb54-b4fe-4ef5-828b-1f1ead484acb name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.713266406Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss/dashboard-metrics-scraper" id=8b86da07-c51f-4476-bfba-481fe11126f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.713505161Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.722296768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.72291481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.738433458Z" level=info msg="Created container 0f2ffe2bb4ec0f77d67b1c87c811d1147b9404af0c12ff58c54e7202460bcecc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss/dashboard-metrics-scraper" id=8b86da07-c51f-4476-bfba-481fe11126f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.741451207Z" level=info msg="Starting container: 0f2ffe2bb4ec0f77d67b1c87c811d1147b9404af0c12ff58c54e7202460bcecc" id=0009b883-9e2f-49c6-b8fa-5e6bd91ac652 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.746161397Z" level=info msg="Started container" PID=1645 containerID=0f2ffe2bb4ec0f77d67b1c87c811d1147b9404af0c12ff58c54e7202460bcecc description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss/dashboard-metrics-scraper id=0009b883-9e2f-49c6-b8fa-5e6bd91ac652 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f08de577d29e7cbb3bc2c085a26e0a4e9596b3540bb953da831bbd589e5a9c70
	Oct 18 10:34:20 embed-certs-101897 conmon[1643]: conmon 0f2ffe2bb4ec0f77d67b <ninfo>: container 1645 exited with status 1
	Oct 18 10:34:20 embed-certs-101897 crio[651]: time="2025-10-18T10:34:20.992187343Z" level=info msg="Removing container: ef572c24d810b8b43389b43b81ce70d3ee8fb93158b470531dc2b64b161c994f" id=dae8c2eb-c7ce-49e5-8637-c83638142a28 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:34:21 embed-certs-101897 crio[651]: time="2025-10-18T10:34:21.002517051Z" level=info msg="Error loading conmon cgroup of container ef572c24d810b8b43389b43b81ce70d3ee8fb93158b470531dc2b64b161c994f: cgroup deleted" id=dae8c2eb-c7ce-49e5-8637-c83638142a28 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:34:21 embed-certs-101897 crio[651]: time="2025-10-18T10:34:21.00863895Z" level=info msg="Removed container ef572c24d810b8b43389b43b81ce70d3ee8fb93158b470531dc2b64b161c994f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss/dashboard-metrics-scraper" id=dae8c2eb-c7ce-49e5-8637-c83638142a28 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.047514897Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.052151643Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.052188558Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.052212066Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.055484193Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.055639969Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.055721062Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.059008483Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.059164259Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.059236236Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.062507427Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:34:23 embed-certs-101897 crio[651]: time="2025-10-18T10:34:23.062546706Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0f2ffe2bb4ec0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   f08de577d29e7       dashboard-metrics-scraper-6ffb444bf9-rhqss   kubernetes-dashboard
	82aa17e63acc5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           32 seconds ago       Running             storage-provisioner         2                   cd187e89281dc       storage-provisioner                          kube-system
	bedd687931a6c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   51 seconds ago       Running             kubernetes-dashboard        0                   2f09030753b5e       kubernetes-dashboard-855c9754f9-7tlh9        kubernetes-dashboard
	444cf18cb855a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   d1fba4e0b27f0       coredns-66bc5c9577-hxrmf                     kube-system
	73ffb19e0ddc3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   a7ff423efb986       kube-proxy-bp45x                             kube-system
	0b098b4bb6d10       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   2d2402090cade       busybox                                      default
	34ddaa028d2f6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   da408a3b35ca0       kindnet-qt6bn                                kube-system
	1cd79be1ea9af       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   cd187e89281dc       storage-provisioner                          kube-system
	0a5b488299c29       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   46e087113e954       kube-controller-manager-embed-certs-101897   kube-system
	ddb705e0f64d6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   bb5a370bd9bc8       kube-scheduler-embed-certs-101897            kube-system
	ea13a5fdbf596       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   af3a372759917       etcd-embed-certs-101897                      kube-system
	98749e78e236d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   c7b66600ddd7d       kube-apiserver-embed-certs-101897            kube-system
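The table above has the shape of crictl's all-containers listing; a plausible way to reproduce it on the node (the exact invocation used by the log collector is an assumption):

	# Hypothetical command behind the container-status table
	sudo crictl ps -a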
	
	
	==> coredns [444cf18cb855af9d1e68665dcc30cb1b65a4ea7542eeb0eca0c74e9c0eb2d3ff] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50469 - 65026 "HINFO IN 306567304835938979.3027967906319139934. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.039085312s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
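The i/o timeouts against https://10.96.0.1:443 show coredns starting before kube-proxy has programmed the kubernetes Service VIP; the log recovers once the dataplane syncs. A quick reachability probe from inside the node (curl availability in the image is an assumption):

	# Hypothetical probe of the kubernetes Service VIP from the node
	docker exec embed-certs-101897 curl -sk --max-time 2 https://10.96.0.1:443/version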
	
	
	==> describe nodes <==
	Name:               embed-certs-101897
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-101897
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=embed-certs-101897
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_32_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:32:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-101897
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:34:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:34:01 +0000   Sat, 18 Oct 2025 10:31:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:34:01 +0000   Sat, 18 Oct 2025 10:31:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:34:01 +0000   Sat, 18 Oct 2025 10:31:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 10:34:01 +0000   Sat, 18 Oct 2025 10:32:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-101897
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                ddfa9a95-8a31-40e5-b44e-f69ada911352
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 coredns-66bc5c9577-hxrmf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m32s
	  kube-system                 etcd-embed-certs-101897                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m38s
	  kube-system                 kindnet-qt6bn                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m32s
	  kube-system                 kube-apiserver-embed-certs-101897             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-controller-manager-embed-certs-101897    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-proxy-bp45x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-scheduler-embed-certs-101897             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rhqss    0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7tlh9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m29s                  kube-proxy       
	  Normal   Starting                 61s                    kube-proxy       
	  Warning  CgroupV1                 2m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m48s (x8 over 2m48s)  kubelet          Node embed-certs-101897 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x8 over 2m48s)  kubelet          Node embed-certs-101897 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x8 over 2m48s)  kubelet          Node embed-certs-101897 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m37s                  kubelet          Node embed-certs-101897 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m37s                  kubelet          Node embed-certs-101897 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m37s                  kubelet          Node embed-certs-101897 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m37s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m33s                  node-controller  Node embed-certs-101897 event: Registered Node embed-certs-101897 in Controller
	  Normal   NodeReady                110s                   kubelet          Node embed-certs-101897 status is now: NodeReady
	  Normal   Starting                 74s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 74s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  74s (x8 over 74s)      kubelet          Node embed-certs-101897 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    74s (x8 over 74s)      kubelet          Node embed-certs-101897 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     74s (x8 over 74s)      kubelet          Node embed-certs-101897 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           60s                    node-controller  Node embed-certs-101897 event: Registered Node embed-certs-101897 in Controller
	
	
	==> dmesg <==
	[Oct18 10:14] overlayfs: idmapped layers are currently not supported
	[Oct18 10:15] overlayfs: idmapped layers are currently not supported
	[Oct18 10:16] overlayfs: idmapped layers are currently not supported
	[  +1.944912] overlayfs: idmapped layers are currently not supported
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	[ +55.677340] overlayfs: idmapped layers are currently not supported
	[  +3.870584] overlayfs: idmapped layers are currently not supported
	[Oct18 10:24] overlayfs: idmapped layers are currently not supported
	[ +31.226998] overlayfs: idmapped layers are currently not supported
	[Oct18 10:27] overlayfs: idmapped layers are currently not supported
	[ +41.576921] overlayfs: idmapped layers are currently not supported
	[  +5.117406] overlayfs: idmapped layers are currently not supported
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	[Oct18 10:31] overlayfs: idmapped layers are currently not supported
	[  +3.453230] overlayfs: idmapped layers are currently not supported
	[Oct18 10:33] overlayfs: idmapped layers are currently not supported
	[  +6.524055] overlayfs: idmapped layers are currently not supported
	[Oct18 10:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ea13a5fdbf596d27a2a9bdd7254f8af427b96bdad19fa1221e096954a6b07ec4] <==
	{"level":"warn","ts":"2025-10-18T10:33:38.895039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:38.911968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:38.930267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:38.980230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.016537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.035705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.087442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.138210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.149039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.172933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.190184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.205357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.241039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.266682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.296644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.305813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.330975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.346734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.370533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.390625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.425338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.458948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.488035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.509331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:33:39.569321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52390","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:34:45 up  2:17,  0 user,  load average: 4.31, 4.27, 3.30
	Linux embed-certs-101897 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [34ddaa028d2f62182e466e60e132fe6f57e28a5686a9fbc0662ab810e428fde4] <==
	I1018 10:33:42.626718       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:33:42.627096       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 10:33:42.711001       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:33:42.711091       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:33:42.711112       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:33:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:33:43.046879       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:33:43.046950       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:33:43.046967       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:33:43.048268       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 10:34:13.047871       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 10:34:13.047874       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 10:34:13.048056       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 10:34:13.048078       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 10:34:14.647756       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 10:34:14.647790       1 metrics.go:72] Registering metrics
	I1018 10:34:14.647861       1 controller.go:711] "Syncing nftables rules"
	I1018 10:34:23.047208       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 10:34:23.047248       1 main.go:301] handling current node
	I1018 10:34:33.049276       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 10:34:33.049391       1 main.go:301] handling current node
	I1018 10:34:43.054013       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 10:34:43.054122       1 main.go:301] handling current node
	
	
	==> kube-apiserver [98749e78e236d9c4ba517df85eb017b3e2daf5eb1d15c7618a96f229e9c048e9] <==
	I1018 10:33:41.015397       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 10:33:41.030719       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:33:41.032216       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 10:33:41.032393       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 10:33:41.032470       1 aggregator.go:171] initial CRD sync complete...
	I1018 10:33:41.032504       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 10:33:41.032533       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 10:33:41.032563       1 cache.go:39] Caches are synced for autoregister controller
	I1018 10:33:41.036067       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 10:33:41.044371       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 10:33:41.044669       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 10:33:41.046153       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 10:33:41.061157       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1018 10:33:41.131749       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 10:33:41.416542       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 10:33:41.668665       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 10:33:43.197738       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 10:33:43.507372       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 10:33:43.624006       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 10:33:43.675994       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 10:33:44.028682       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.163.255"}
	I1018 10:33:44.078241       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.219.55"}
	I1018 10:33:45.309601       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 10:33:45.643059       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 10:33:45.736424       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0a5b488299c29fa8c745b6bbd5d7b3db828119f52e047c424ea4b9156c222088] <==
	I1018 10:33:45.278300       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 10:33:45.279028       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 10:33:45.279216       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 10:33:45.279319       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 10:33:45.279464       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 10:33:45.279477       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 10:33:45.279501       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 10:33:45.284878       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:33:45.284922       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 10:33:45.287864       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:33:45.295649       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 10:33:45.295782       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 10:33:45.297093       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 10:33:45.297558       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 10:33:45.297760       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 10:33:45.307362       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 10:33:45.307555       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 10:33:45.307704       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-101897"
	I1018 10:33:45.307794       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 10:33:45.321287       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 10:33:45.321475       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:33:45.328111       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 10:33:45.329797       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 10:33:45.330026       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 10:33:45.352105       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [73ffb19e0ddc3982b14bf8a1380764785f013d969977dd645a673cd8aef57ec1] <==
	I1018 10:33:43.327173       1 server_linux.go:53] "Using iptables proxy"
	I1018 10:33:43.707389       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 10:33:43.814520       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 10:33:43.814581       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 10:33:43.814665       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 10:33:44.106193       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:33:44.106314       1 server_linux.go:132] "Using iptables Proxier"
	I1018 10:33:44.113081       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 10:33:44.113535       1 server.go:527] "Version info" version="v1.34.1"
	I1018 10:33:44.113776       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:33:44.125656       1 config.go:200] "Starting service config controller"
	I1018 10:33:44.129811       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 10:33:44.130328       1 config.go:106] "Starting endpoint slice config controller"
	I1018 10:33:44.130383       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 10:33:44.130424       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 10:33:44.130458       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 10:33:44.156064       1 config.go:309] "Starting node config controller"
	I1018 10:33:44.156104       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 10:33:44.156113       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 10:33:44.233926       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 10:33:44.234148       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 10:33:44.234271       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ddb705e0f64d66513424efc45237983978c1000f91094a9731d126dd8cab8ac7] <==
	I1018 10:33:39.132780       1 serving.go:386] Generated self-signed cert in-memory
	W1018 10:33:40.921306       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 10:33:40.921376       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 10:33:40.921388       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 10:33:40.921394       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 10:33:41.052409       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 10:33:41.058605       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:33:41.073623       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 10:33:41.082065       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 10:33:41.082123       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:33:41.098570       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:33:41.303431       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 10:33:45 embed-certs-101897 kubelet[780]: I1018 10:33:45.705054     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjmlk\" (UniqueName: \"kubernetes.io/projected/9eb1ff49-6a0f-4015-91cc-7fbd126b4adc-kube-api-access-jjmlk\") pod \"dashboard-metrics-scraper-6ffb444bf9-rhqss\" (UID: \"9eb1ff49-6a0f-4015-91cc-7fbd126b4adc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss"
	Oct 18 10:33:45 embed-certs-101897 kubelet[780]: I1018 10:33:45.705688     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9eb1ff49-6a0f-4015-91cc-7fbd126b4adc-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-rhqss\" (UID: \"9eb1ff49-6a0f-4015-91cc-7fbd126b4adc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss"
	Oct 18 10:33:45 embed-certs-101897 kubelet[780]: I1018 10:33:45.705832     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ab31733e-962a-4dd9-9b6f-78be82a1d96b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-7tlh9\" (UID: \"ab31733e-962a-4dd9-9b6f-78be82a1d96b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7tlh9"
	Oct 18 10:33:45 embed-certs-101897 kubelet[780]: I1018 10:33:45.705952     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ffd4\" (UniqueName: \"kubernetes.io/projected/ab31733e-962a-4dd9-9b6f-78be82a1d96b-kube-api-access-6ffd4\") pod \"kubernetes-dashboard-855c9754f9-7tlh9\" (UID: \"ab31733e-962a-4dd9-9b6f-78be82a1d96b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7tlh9"
	Oct 18 10:33:45 embed-certs-101897 kubelet[780]: W1018 10:33:45.898721     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6/crio-2f09030753b5ec397bd14876bdb53897a9999e26f5faf40563a4109681788de6 WatchSource:0}: Error finding container 2f09030753b5ec397bd14876bdb53897a9999e26f5faf40563a4109681788de6: Status 404 returned error can't find the container with id 2f09030753b5ec397bd14876bdb53897a9999e26f5faf40563a4109681788de6
	Oct 18 10:33:46 embed-certs-101897 kubelet[780]: W1018 10:33:46.151247     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a8859be818ee8aa9cd98f715f1bce9575850593c29841d6db7907c8a847f2fa6/crio-f08de577d29e7cbb3bc2c085a26e0a4e9596b3540bb953da831bbd589e5a9c70 WatchSource:0}: Error finding container f08de577d29e7cbb3bc2c085a26e0a4e9596b3540bb953da831bbd589e5a9c70: Status 404 returned error can't find the container with id f08de577d29e7cbb3bc2c085a26e0a4e9596b3540bb953da831bbd589e5a9c70
	Oct 18 10:33:58 embed-certs-101897 kubelet[780]: I1018 10:33:58.917545     780 scope.go:117] "RemoveContainer" containerID="16fbc7f8980066077d959a5961447cdbeabd3c47849069b014ba3615e59d3c95"
	Oct 18 10:33:58 embed-certs-101897 kubelet[780]: I1018 10:33:58.940315     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7tlh9" podStartSLOduration=6.307477175 podStartE2EDuration="13.940297373s" podCreationTimestamp="2025-10-18 10:33:45 +0000 UTC" firstStartedPulling="2025-10-18 10:33:45.90416233 +0000 UTC m=+14.673174775" lastFinishedPulling="2025-10-18 10:33:53.536982463 +0000 UTC m=+22.305994973" observedRunningTime="2025-10-18 10:33:53.93347001 +0000 UTC m=+22.702482446" watchObservedRunningTime="2025-10-18 10:33:58.940297373 +0000 UTC m=+27.709309810"
	Oct 18 10:33:59 embed-certs-101897 kubelet[780]: I1018 10:33:59.923374     780 scope.go:117] "RemoveContainer" containerID="16fbc7f8980066077d959a5961447cdbeabd3c47849069b014ba3615e59d3c95"
	Oct 18 10:33:59 embed-certs-101897 kubelet[780]: I1018 10:33:59.923541     780 scope.go:117] "RemoveContainer" containerID="ef572c24d810b8b43389b43b81ce70d3ee8fb93158b470531dc2b64b161c994f"
	Oct 18 10:33:59 embed-certs-101897 kubelet[780]: E1018 10:33:59.923712     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhqss_kubernetes-dashboard(9eb1ff49-6a0f-4015-91cc-7fbd126b4adc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss" podUID="9eb1ff49-6a0f-4015-91cc-7fbd126b4adc"
	Oct 18 10:34:00 embed-certs-101897 kubelet[780]: I1018 10:34:00.925747     780 scope.go:117] "RemoveContainer" containerID="ef572c24d810b8b43389b43b81ce70d3ee8fb93158b470531dc2b64b161c994f"
	Oct 18 10:34:00 embed-certs-101897 kubelet[780]: E1018 10:34:00.925907     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhqss_kubernetes-dashboard(9eb1ff49-6a0f-4015-91cc-7fbd126b4adc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss" podUID="9eb1ff49-6a0f-4015-91cc-7fbd126b4adc"
	Oct 18 10:34:06 embed-certs-101897 kubelet[780]: I1018 10:34:06.127236     780 scope.go:117] "RemoveContainer" containerID="ef572c24d810b8b43389b43b81ce70d3ee8fb93158b470531dc2b64b161c994f"
	Oct 18 10:34:06 embed-certs-101897 kubelet[780]: E1018 10:34:06.127438     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhqss_kubernetes-dashboard(9eb1ff49-6a0f-4015-91cc-7fbd126b4adc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss" podUID="9eb1ff49-6a0f-4015-91cc-7fbd126b4adc"
	Oct 18 10:34:12 embed-certs-101897 kubelet[780]: I1018 10:34:12.958862     780 scope.go:117] "RemoveContainer" containerID="1cd79be1ea9aff71ffca848e73458fa728047d78d2e68ae5aeb1565abb1f298c"
	Oct 18 10:34:20 embed-certs-101897 kubelet[780]: I1018 10:34:20.710233     780 scope.go:117] "RemoveContainer" containerID="ef572c24d810b8b43389b43b81ce70d3ee8fb93158b470531dc2b64b161c994f"
	Oct 18 10:34:20 embed-certs-101897 kubelet[780]: I1018 10:34:20.986152     780 scope.go:117] "RemoveContainer" containerID="ef572c24d810b8b43389b43b81ce70d3ee8fb93158b470531dc2b64b161c994f"
	Oct 18 10:34:20 embed-certs-101897 kubelet[780]: I1018 10:34:20.986569     780 scope.go:117] "RemoveContainer" containerID="0f2ffe2bb4ec0f77d67b1c87c811d1147b9404af0c12ff58c54e7202460bcecc"
	Oct 18 10:34:20 embed-certs-101897 kubelet[780]: E1018 10:34:20.986766     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhqss_kubernetes-dashboard(9eb1ff49-6a0f-4015-91cc-7fbd126b4adc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss" podUID="9eb1ff49-6a0f-4015-91cc-7fbd126b4adc"
	Oct 18 10:34:26 embed-certs-101897 kubelet[780]: I1018 10:34:26.127845     780 scope.go:117] "RemoveContainer" containerID="0f2ffe2bb4ec0f77d67b1c87c811d1147b9404af0c12ff58c54e7202460bcecc"
	Oct 18 10:34:26 embed-certs-101897 kubelet[780]: E1018 10:34:26.128060     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhqss_kubernetes-dashboard(9eb1ff49-6a0f-4015-91cc-7fbd126b4adc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhqss" podUID="9eb1ff49-6a0f-4015-91cc-7fbd126b4adc"
	Oct 18 10:34:38 embed-certs-101897 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 10:34:38 embed-certs-101897 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 10:34:38 embed-certs-101897 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [bedd687931a6c72eba28323577cb0d8ab111b649fbb82f440e5a34bc42246086] <==
	2025/10/18 10:33:53 Using namespace: kubernetes-dashboard
	2025/10/18 10:33:53 Using in-cluster config to connect to apiserver
	2025/10/18 10:33:53 Using secret token for csrf signing
	2025/10/18 10:33:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 10:33:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 10:33:53 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 10:33:53 Generating JWE encryption key
	2025/10/18 10:33:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 10:33:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 10:33:54 Initializing JWE encryption key from synchronized object
	2025/10/18 10:33:54 Creating in-cluster Sidecar client
	2025/10/18 10:33:54 Serving insecurely on HTTP port: 9090
	2025/10/18 10:33:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 10:34:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 10:33:53 Starting overwatch
	
	
	==> storage-provisioner [1cd79be1ea9aff71ffca848e73458fa728047d78d2e68ae5aeb1565abb1f298c] <==
	I1018 10:33:42.700330       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 10:34:12.926556       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [82aa17e63acc5c9ee30d47b290ef374b230450f3fa79a6048d75d01c95af1229] <==
	W1018 10:34:20.743627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:24.342294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:27.396530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:30.421890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:30.427553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:34:30.427756       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 10:34:30.427955       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-101897_f3c57c77-aa3c-49b5-92ff-36fd8797c468!
	I1018 10:34:30.428125       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3b672bc-ac74-4ae1-9e75-a8332f5a8fca", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-101897_f3c57c77-aa3c-49b5-92ff-36fd8797c468 became leader
	W1018 10:34:30.432927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:30.440127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:34:30.533260       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-101897_f3c57c77-aa3c-49b5-92ff-36fd8797c468!
	W1018 10:34:32.448910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:32.455776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:34.459471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:34.464213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:36.467209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:36.471293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:38.477020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:38.489797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:40.494395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:40.501491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:42.504049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:42.516788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:44.521110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:34:44.535736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
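The repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings above come from the storage-provisioner's Endpoints-based leader-election lock. A minimal client-go sketch of the replacement the warning points at, reading EndpointSlices via the well-known kubernetes.io/service-name label (the namespace and service name here are illustrative, not taken from this run):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Same in-cluster config path the provisioner uses.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// EndpointSlices for a Service are matched via the well-known
	// kubernetes.io/service-name label, not by object name.
	slices, err := client.DiscoveryV1().EndpointSlices("kube-system").List(
		context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kube-dns"},
	)
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		for _, ep := range s.Endpoints {
			fmt.Println(s.Name, ep.Addresses)
		}
	}
}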
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-101897 -n embed-certs-101897
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-101897 -n embed-certs-101897: exit status 2 (466.116781ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-101897 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.89s)
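The kubelet entries in the logs above walk the dashboard-metrics-scraper restart delay from "back-off 10s" to "back-off 20s": the kubelet restarts a crashing container with an exponential back-off that starts at 10s, doubles on each crash, and is capped at five minutes. A standalone Go model of that documented schedule (an illustration only, not kubelet code):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Doubling restart delay, starting at 10s, capped at 5m.
	d := 10 * time.Second
	for i := 0; i < 8; i++ {
		fmt.Println(d) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s
		d *= 2
		if d > 5*time.Minute {
			d = 5 * time.Minute
		}
	}
}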

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-577403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-577403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (332.112141ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:35:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
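MK_ADDON_ENABLE_PAUSED means the addon flow first asks the runtime which containers are paused by shelling out to "sudo runc list -f json"; on this CRI-O node the default runc state directory /run/runc does not exist, hence the "no such file or directory" above. A hedged sketch of that style of check, assuming runc is installed (runcState and listPaused are illustrative names, not minikube's actual helpers):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcState mirrors the fields of interest in "runc list -f json" output.
type runcState struct {
	ID     string `json:"id"`
	Status string `json:"status"` // "running", "paused", ...
}

func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// A missing state dir surfaces here as exit status 1.
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var states []runcState
	if err := json.Unmarshal(out, &states); err != nil {
		return nil, err
	}
	var paused []string
	for _, s := range states {
		if s.Status == "paused" {
			paused = append(paused, s.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	fmt.Println(ids, err)
}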
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-577403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-577403
helpers_test.go:243: (dbg) docker inspect newest-cni-577403:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07",
	        "Created": "2025-10-18T10:34:57.122600154Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 491813,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:34:57.283581453Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07/hosts",
	        "LogPath": "/var/lib/docker/containers/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07-json.log",
	        "Name": "/newest-cni-577403",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-577403:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-577403",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07",
	                "LowerDir": "/var/lib/docker/overlay2/f2be6c97e2d5e190ff5e8c5239916812c89591cd76f86c315857e04c4fbe56ba-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2be6c97e2d5e190ff5e8c5239916812c89591cd76f86c315857e04c4fbe56ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2be6c97e2d5e190ff5e8c5239916812c89591cd76f86c315857e04c4fbe56ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2be6c97e2d5e190ff5e8c5239916812c89591cd76f86c315857e04c4fbe56ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-577403",
	                "Source": "/var/lib/docker/volumes/newest-cni-577403/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-577403",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-577403",
	                "name.minikube.sigs.k8s.io": "newest-cni-577403",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c85a77b479a5dda684fcc1c6a1821eed826688810c0b59e3026618c95d62650d",
	            "SandboxKey": "/var/run/docker/netns/c85a77b479a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-577403": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:eb:0f:24:ca:43",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4944dc29f48a85d80603faba3e0eb9e1b1723b9d4244f496af940a2c5ae27592",
	                    "EndpointID": "0925ef354310951695699e4716c04330c3593490e8321e83016e8b4aec6a3e86",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-577403",
	                        "8f5c98145c70"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
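The helpers read single fields out of structures like this with Go templates (for example --format={{.Host}} against minikube status). The same mechanism works directly against docker inspect; a small sketch that recovers the published SSH host port shown above, assuming the docker CLI is on PATH and the container still exists:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostPort(container, port string) (string, error) {
	// Index into NetworkSettings.Ports["22/tcp"][0].HostPort.
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "inspect", "--format", tmpl, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	p, err := hostPort("newest-cni-577403", "22/tcp")
	fmt.Println(p, err) // e.g. "33454" per the inspect output above
}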
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-577403 -n newest-cni-577403
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-577403 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-577403 logs -n 25: (1.140859707s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-309062                                                                                                                                                                                                                     │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ delete  │ -p old-k8s-version-309062                                                                                                                                                                                                                     │ old-k8s-version-309062       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:32 UTC │
	│ delete  │ -p cert-expiration-733799                                                                                                                                                                                                                     │ cert-expiration-733799       │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:31 UTC │
	│ start   │ -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:31 UTC │ 18 Oct 25 10:32 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-715182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-715182 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-101897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	│ stop    │ -p embed-certs-101897 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-715182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-101897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ start   │ -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:34 UTC │
	│ image   │ default-k8s-diff-port-715182 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ pause   │ -p default-k8s-diff-port-715182 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-715182                                                                                                                                                                                                               │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p default-k8s-diff-port-715182                                                                                                                                                                                                               │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-922359                                                                                                                                                                                                               │ disable-driver-mounts-922359 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ start   │ -p no-preload-027087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	│ image   │ embed-certs-101897 image list --format=json                                                                                                                                                                                                   │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ pause   │ -p embed-certs-101897 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	│ delete  │ -p embed-certs-101897                                                                                                                                                                                                                         │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p embed-certs-101897                                                                                                                                                                                                                         │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ start   │ -p newest-cni-577403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:35 UTC │
	│ addons  │ enable metrics-server -p newest-cni-577403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:34:50
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 10:34:50.166264  491315 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:34:50.166928  491315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:34:50.166957  491315 out.go:374] Setting ErrFile to fd 2...
	I1018 10:34:50.166977  491315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:34:50.167314  491315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:34:50.167806  491315 out.go:368] Setting JSON to false
	I1018 10:34:50.168836  491315 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8241,"bootTime":1760775450,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:34:50.168935  491315 start.go:141] virtualization:  
	I1018 10:34:50.176614  491315 out.go:179] * [newest-cni-577403] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:34:50.180155  491315 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:34:50.180227  491315 notify.go:220] Checking for updates...
	I1018 10:34:50.187966  491315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:34:50.191410  491315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:34:50.197302  491315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:34:50.200872  491315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:34:50.204307  491315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:34:50.207983  491315 config.go:182] Loaded profile config "no-preload-027087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:34:50.208149  491315 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:34:50.258171  491315 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:34:50.258313  491315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:34:50.370284  491315 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-18 10:34:50.360382914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:34:50.370398  491315 docker.go:318] overlay module found
	I1018 10:34:50.377127  491315 out.go:179] * Using the docker driver based on user configuration
	I1018 10:34:50.381497  491315 start.go:305] selected driver: docker
	I1018 10:34:50.381516  491315 start.go:925] validating driver "docker" against <nil>
	I1018 10:34:50.381530  491315 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:34:50.382255  491315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:34:50.463575  491315 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-18 10:34:50.454389275 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:34:50.463724  491315 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1018 10:34:50.463759  491315 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1018 10:34:50.463995  491315 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 10:34:50.467824  491315 out.go:179] * Using Docker driver with root privileges
	I1018 10:34:50.470996  491315 cni.go:84] Creating CNI manager for ""
	I1018 10:34:50.471065  491315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:34:50.471074  491315 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 10:34:50.471158  491315 start.go:349] cluster config:
	{Name:newest-cni-577403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:34:50.474564  491315 out.go:179] * Starting "newest-cni-577403" primary control-plane node in "newest-cni-577403" cluster
	I1018 10:34:50.477840  491315 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:34:50.480991  491315 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:34:50.484043  491315 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:34:50.484078  491315 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:34:50.484107  491315 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 10:34:50.484117  491315 cache.go:58] Caching tarball of preloaded images
	I1018 10:34:50.484202  491315 preload.go:233] Found /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 10:34:50.484211  491315 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
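
The preload check above short-circuits a large download: the lz4 tarball of images for this Kubernetes version and runtime is already in the local cache, so only its existence is verified. A sketch of that check under the same file-naming scheme (cachedPreload is a hypothetical helper):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// cachedPreload is a hypothetical version of the check logged above:
	// if the preload tarball for this Kubernetes version and runtime is
	// already on disk, reuse it instead of downloading.
	func cachedPreload(cacheDir, k8sVersion, runtime string) (string, bool) {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-arm64.tar.lz4", k8sVersion, runtime)
		path := filepath.Join(cacheDir, "preloaded-tarball", name)
		if _, err := os.Stat(path); err == nil {
			return path, true // found in cache, skip download
		}
		return path, false
	}

	func main() {
		if path, ok := cachedPreload(os.ExpandEnv("$HOME/.minikube/cache"), "v1.34.1", "cri-o"); ok {
			fmt.Println("found local preload:", path)
		}
	}
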
	I1018 10:34:50.484328  491315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/config.json ...
	I1018 10:34:50.484350  491315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/config.json: {Name:mk4139581388cdfff913e52ebe58e281b2f6dd6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:34:50.509606  491315 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:34:50.509623  491315 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 10:34:50.509637  491315 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:34:50.509660  491315 start.go:360] acquireMachinesLock for newest-cni-577403: {Name:mk1e4df99ad9f1535f8fd365f2c9b2df285e2ff8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:34:50.509760  491315 start.go:364] duration metric: took 84.341µs to acquireMachinesLock for "newest-cni-577403"
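
acquireMachinesLock serializes machine creation per profile; the options printed ({Delay:500ms Timeout:10m0s}) suggest a lock acquired by polling with a retry delay and an overall timeout. A standard-library sketch of that idea (the lock-file mechanism here is an assumption, not minikube's actual implementation):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file, retrying every delay
	// until timeout, mirroring the Delay/Timeout options in the log.
	func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out waiting for lock " + path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock(os.TempDir()+"/newest-cni-577403.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock acquired")
	}
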
	I1018 10:34:50.509784  491315 start.go:93] Provisioning new machine with config: &{Name:newest-cni-577403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:34:50.509852  491315 start.go:125] createHost starting for "" (driver="docker")
	I1018 10:34:46.706763  487845 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:34:48.851629  487845 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.192517093s)
	I1018 10:34:48.851661  487845 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1018 10:34:48.851679  487845 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 10:34:48.851716  487845 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.144928505s)
	I1018 10:34:48.851729  487845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 10:34:48.851750  487845 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1018 10:34:48.851826  487845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1018 10:34:50.702288  487845 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.850427661s)
	I1018 10:34:50.702320  487845 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1018 10:34:50.702346  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
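
The stat-then-scp exchange above is the transfer pattern used for every cached image: a cheap remote stat decides whether the multi-megabyte copy can be skipped. A sketch of the pattern using the ssh and scp CLIs in place of minikube's internal ssh_runner (host and paths follow the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureRemoteFile is a hypothetical sketch of the stat-then-copy
	// pattern in the log: a cheap existence check decides whether the
	// multi-megabyte transfer can be skipped entirely.
	func ensureRemoteFile(host, local, remote string) error {
		// Exit status 1 from stat means the file is missing, as in the log.
		check := exec.Command("ssh", host, "stat", "-c", "%s %y", remote)
		if err := check.Run(); err == nil {
			return nil // already present, skip the copy
		}
		cp := exec.Command("scp", local, host+":"+remote)
		cp.Stderr = os.Stderr
		return cp.Run()
	}

	func main() {
		err := ensureRemoteFile("docker@127.0.0.1",
			"storage-provisioner_v5",
			"/var/lib/minikube/images/storage-provisioner_v5")
		if err != nil {
			fmt.Println("transfer failed:", err)
		}
	}
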
	I1018 10:34:50.702791  487845 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.851046884s)
	I1018 10:34:50.702815  487845 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1018 10:34:50.702842  487845 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 10:34:50.702902  487845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 10:34:50.515434  491315 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 10:34:50.515681  491315 start.go:159] libmachine.API.Create for "newest-cni-577403" (driver="docker")
	I1018 10:34:50.515726  491315 client.go:168] LocalClient.Create starting
	I1018 10:34:50.515799  491315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem
	I1018 10:34:50.515836  491315 main.go:141] libmachine: Decoding PEM data...
	I1018 10:34:50.515848  491315 main.go:141] libmachine: Parsing certificate...
	I1018 10:34:50.515905  491315 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem
	I1018 10:34:50.515922  491315 main.go:141] libmachine: Decoding PEM data...
	I1018 10:34:50.515931  491315 main.go:141] libmachine: Parsing certificate...
	I1018 10:34:50.516323  491315 cli_runner.go:164] Run: docker network inspect newest-cni-577403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 10:34:50.534707  491315 cli_runner.go:211] docker network inspect newest-cni-577403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 10:34:50.534779  491315 network_create.go:284] running [docker network inspect newest-cni-577403] to gather additional debugging logs...
	I1018 10:34:50.534796  491315 cli_runner.go:164] Run: docker network inspect newest-cni-577403
	W1018 10:34:50.551767  491315 cli_runner.go:211] docker network inspect newest-cni-577403 returned with exit code 1
	I1018 10:34:50.551796  491315 network_create.go:287] error running [docker network inspect newest-cni-577403]: docker network inspect newest-cni-577403: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-577403 not found
	I1018 10:34:50.551809  491315 network_create.go:289] output of [docker network inspect newest-cni-577403]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-577403 not found
	
	** /stderr **
	I1018 10:34:50.551912  491315 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:34:50.575890  491315 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-57e2bd20fa2f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:61:d0:06:18:0c} reservation:<nil>}
	I1018 10:34:50.576207  491315 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bb4a8c61b69d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:8c:0f:03:ab:d8} reservation:<nil>}
	I1018 10:34:50.576534  491315 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1d3a8356dfdc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:ce:7a:d0:e4:d4} reservation:<nil>}
	I1018 10:34:50.576821  491315 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-87a54e6a9010 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:77:85:71:80:25} reservation:<nil>}
	I1018 10:34:50.577320  491315 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b3c60}
	I1018 10:34:50.577348  491315 network_create.go:124] attempt to create docker network newest-cni-577403 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 10:34:50.577414  491315 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-577403 newest-cni-577403
	I1018 10:34:50.647564  491315 network_create.go:108] docker network newest-cni-577403 192.168.85.0/24 created
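
The network.go lines show how the subnet was chosen: candidate private /24s are probed in order (192.168.49, .58, .67, .76, .85, the third octet stepping by 9), and a subnet counts as taken when its gateway address already belongs to a local bridge interface. A sketch of that scan (the step size and probe are inferred from the log output, not taken from minikube source):

	package main

	import (
		"fmt"
		"net"
	)

	// gatewayTaken reports whether any local interface already owns the
	// would-be gateway (x.y.z.1), which is how a taken subnet shows up
	// as a br-* bridge in the log above.
	func gatewayTaken(gw net.IP) bool {
		addrs, _ := net.InterfaceAddrs()
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.Equal(gw) {
				return true
			}
		}
		return false
	}

	func main() {
		// Candidate third octets step by 9, matching 192.168.49/58/67/76/85.
		for octet := 49; octet <= 255; octet += 9 {
			gw := net.IPv4(192, 168, byte(octet), 1)
			if gatewayTaken(gw) {
				fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
				continue
			}
			fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
			break
		}
	}
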
	I1018 10:34:50.647594  491315 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-577403" container
	I1018 10:34:50.647670  491315 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 10:34:50.668470  491315 cli_runner.go:164] Run: docker volume create newest-cni-577403 --label name.minikube.sigs.k8s.io=newest-cni-577403 --label created_by.minikube.sigs.k8s.io=true
	I1018 10:34:50.690807  491315 oci.go:103] Successfully created a docker volume newest-cni-577403
	I1018 10:34:50.690904  491315 cli_runner.go:164] Run: docker run --rm --name newest-cni-577403-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-577403 --entrypoint /usr/bin/test -v newest-cni-577403:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 10:34:51.579002  491315 oci.go:107] Successfully prepared a docker volume newest-cni-577403
	I1018 10:34:51.579055  491315 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:34:51.579075  491315 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 10:34:51.579167  491315 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-577403:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 10:34:52.320765  487845 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.617836537s)
	I1018 10:34:52.320788  487845 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1018 10:34:52.320805  487845 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 10:34:52.320855  487845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 10:34:54.180218  487845 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.859339999s)
	I1018 10:34:54.180245  487845 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1018 10:34:54.180263  487845 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1018 10:34:54.180315  487845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1018 10:34:56.979982  491315 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-577403:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.400750979s)
	I1018 10:34:56.980016  491315 kic.go:203] duration metric: took 5.400937131s to extract preloaded images to volume ...
	W1018 10:34:56.980147  491315 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 10:34:56.980266  491315 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 10:34:57.105810  491315 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-577403 --name newest-cni-577403 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-577403 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-577403 --network newest-cni-577403 --ip 192.168.85.2 --volume newest-cni-577403:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 10:34:57.555032  491315 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Running}}
	I1018 10:34:57.610907  491315 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:34:57.633516  491315 cli_runner.go:164] Run: docker exec newest-cni-577403 stat /var/lib/dpkg/alternatives/iptables
	I1018 10:34:57.691074  491315 oci.go:144] the created container "newest-cni-577403" has a running status.
	I1018 10:34:57.691111  491315 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa...
	I1018 10:34:58.247679  491315 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
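
kic.go creates a fresh RSA keypair for the machine and installs the public half as authorized_keys inside the container. A condensed sketch of generating that keypair with crypto/rsa and golang.org/x/crypto/ssh (writeSSHKeypair is an illustrative name):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// writeSSHKeypair emits id_rsa (PEM) and id_rsa.pub (authorized_keys
	// format), like the machines/<name>/ files in the log.
	func writeSSHKeypair(path string) error {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return err
		}
		priv := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		if err := os.WriteFile(path, priv, 0o600); err != nil {
			return err
		}
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			return err
		}
		return os.WriteFile(path+".pub", ssh.MarshalAuthorizedKey(pub), 0o644)
	}

	func main() {
		if err := writeSSHKeypair("id_rsa"); err != nil {
			fmt.Println(err)
		}
	}
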
	I1018 10:34:58.278791  491315 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:34:58.309525  491315 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 10:34:58.309547  491315 kic_runner.go:114] Args: [docker exec --privileged newest-cni-577403 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 10:34:58.380681  491315 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:34:58.413407  491315 machine.go:93] provisionDockerMachine start ...
	I1018 10:34:58.413594  491315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:34:58.441466  491315 main.go:141] libmachine: Using SSH client type: native
	I1018 10:34:58.441818  491315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33454 <nil> <nil>}
	I1018 10:34:58.441827  491315 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:34:58.442467  491315 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
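
The dial error above is expected: sshd inside the just-started container is not accepting connections yet, so the first handshake sees EOF and the provisioner retries (the same hostname command succeeds at 10:35:01 further down). A sketch of a dial-with-retry loop under that assumption (attempt count and wait are illustrative; key-based auth is omitted for brevity):

	package main

	import (
		"fmt"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// dialWithRetry keeps attempting an SSH handshake until the container's
	// sshd comes up, which is why the single EOF in the log is harmless.
	func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int, wait time.Duration) (*ssh.Client, error) {
		var err error
		for i := 0; i < attempts; i++ {
			var c *ssh.Client
			if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
				return c, nil
			}
			time.Sleep(wait)
		}
		return nil, fmt.Errorf("ssh not ready after %d attempts: %w", attempts, err)
	}

	func main() {
		cfg := &ssh.ClientConfig{
			User:            "docker",
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
			Timeout:         5 * time.Second,
			// Auth with the generated id_rsa key would go here.
		}
		if _, err := dialWithRetry("127.0.0.1:33454", cfg, 30, time.Second); err != nil {
			fmt.Println(err)
		}
	}
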
	I1018 10:34:58.875710  487845 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.695371339s)
	I1018 10:34:58.875745  487845 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1018 10:34:58.875765  487845 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1018 10:34:58.875812  487845 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1018 10:34:59.716544  487845 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1018 10:34:59.716580  487845 cache_images.go:124] Successfully loaded all cached images
	I1018 10:34:59.716586  487845 cache_images.go:93] duration metric: took 18.28418225s to LoadCachedImages
	I1018 10:34:59.716597  487845 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 10:34:59.716690  487845 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-027087 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-027087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 10:34:59.716789  487845 ssh_runner.go:195] Run: crio config
	I1018 10:34:59.775188  487845 cni.go:84] Creating CNI manager for ""
	I1018 10:34:59.775211  487845 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:34:59.775230  487845 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:34:59.775255  487845 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-027087 NodeName:no-preload-027087 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:34:59.775391  487845 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-027087"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 10:34:59.775463  487845 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:34:59.784891  487845 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1018 10:34:59.784984  487845 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1018 10:34:59.792861  487845 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1018 10:34:59.792951  487845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1018 10:34:59.793486  487845 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1018 10:34:59.793535  487845 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/linux/arm64/v1.34.1/kubelet
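
binary.go fetches kubeadm, kubelet, and kubectl from dl.k8s.io and validates each against the published .sha256 file, which is what the checksum=file: query in the URLs above encodes. A standalone sketch of a download verified against such a sidecar checksum:

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// fetchVerified downloads url to dest and checks it against the hex
	// digest published at url+".sha256", mirroring the checksum=file:
	// scheme in the log.
	func fetchVerified(url, dest string) error {
		sumResp, err := http.Get(url + ".sha256")
		if err != nil {
			return err
		}
		defer sumResp.Body.Close()
		sumBytes, err := io.ReadAll(sumResp.Body)
		if err != nil {
			return err
		}
		want := strings.Fields(string(sumBytes))[0]

		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		f, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		url := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl"
		if err := fetchVerified(url, "kubectl"); err != nil {
			fmt.Println(err)
		}
	}
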
	I1018 10:34:59.798214  487845 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1018 10:34:59.798248  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1018 10:35:00.837698  487845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:35:00.856300  487845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1018 10:35:00.864601  487845 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1018 10:35:00.865816  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1018 10:35:01.227028  487845 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1018 10:35:01.240826  487845 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1018 10:35:01.240870  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1018 10:35:01.729353  487845 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:35:01.753641  487845 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 10:35:01.773316  487845 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:35:01.794946  487845 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 10:35:01.822168  487845 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:35:01.826668  487845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
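
The one-liner above makes the hosts entry idempotent: grep -v strips any existing line ending in a tab plus the hostname, the fresh IP mapping is appended, and the temp file is copied back with sudo. The same logic written out in Go for readability (a sketch; minikube runs the shell version over SSH):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pinHostsEntry rewrites hostsPath so that exactly one line maps name
	// to ip, matching the grep -v / echo / cp pipeline in the log.
	func pinHostsEntry(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) { // drop any stale mapping
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := pinHostsEntry("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}
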
	I1018 10:35:01.837294  487845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:35:01.979070  487845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:35:02.007550  487845 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087 for IP: 192.168.76.2
	I1018 10:35:02.007576  487845 certs.go:195] generating shared ca certs ...
	I1018 10:35:02.007592  487845 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:02.007747  487845 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:35:02.007808  487845 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:35:02.007821  487845 certs.go:257] generating profile certs ...
	I1018 10:35:02.007881  487845 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.key
	I1018 10:35:02.007896  487845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.crt with IP's: []
	I1018 10:35:02.434946  487845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.crt ...
	I1018 10:35:02.435026  487845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.crt: {Name:mk8f2dbb26048d09c2e4ad3b3fa5d79d0ced7d2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:02.435252  487845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.key ...
	I1018 10:35:02.435291  487845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.key: {Name:mkeacca0794408053b993f60ba26f3c65a90e179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:02.435422  487845 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.key.1343fb15
	I1018 10:35:02.435474  487845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.crt.1343fb15 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 10:35:03.097158  487845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.crt.1343fb15 ...
	I1018 10:35:03.097207  487845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.crt.1343fb15: {Name:mk6e10da4c36d82f059e838d198ad0e98dda716d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:03.097384  487845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.key.1343fb15 ...
	I1018 10:35:03.097401  487845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.key.1343fb15: {Name:mkef3b48d269de2733b2d1459a847e7c7693b9ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:03.097480  487845 certs.go:382] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.crt.1343fb15 -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.crt
	I1018 10:35:03.097556  487845 certs.go:386] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.key.1343fb15 -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.key
	I1018 10:35:03.097613  487845 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/proxy-client.key
	I1018 10:35:03.097627  487845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/proxy-client.crt with IP's: []
	I1018 10:35:03.631848  487845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/proxy-client.crt ...
	I1018 10:35:03.631904  487845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/proxy-client.crt: {Name:mk995eec2245fcfe7f0394435a9fcc71fdba9e85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:03.632162  487845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/proxy-client.key ...
	I1018 10:35:03.632201  487845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/proxy-client.key: {Name:mk790ec67fd38b87a58eaf5dc7fbadb1e0fe53a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:03.632485  487845 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:35:03.632555  487845 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:35:03.632582  487845 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:35:03.632637  487845 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:35:03.632687  487845 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:35:03.632739  487845 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:35:03.632809  487845 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:35:03.643313  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:35:03.665575  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:35:03.700933  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:35:03.740401  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:35:03.774262  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 10:35:03.793549  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:35:03.812222  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:35:03.830486  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 10:35:03.853627  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:35:03.874935  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:35:03.909884  487845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:35:03.936519  487845 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:35:03.950118  487845 ssh_runner.go:195] Run: openssl version
	I1018 10:35:03.958838  487845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:35:03.968058  487845 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:35:03.972215  487845 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:35:03.972279  487845 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:35:04.019001  487845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:35:04.032144  487845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:35:04.043812  487845 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:35:04.048113  487845 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:35:04.048183  487845 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:35:04.105795  487845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:35:04.122197  487845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:35:04.133733  487845 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:35:04.138389  487845 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:35:04.138506  487845 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:35:04.181516  487845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
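
The three ln -fs steps above install each CA under /etc/ssl/certs/<subject-hash>.0: openssl x509 -hash -noout prints the 8-hex-digit subject hash (b5213941 for minikubeCA here), and OpenSSL-based clients locate trusted CAs by exactly that symlink name. A sketch of the same step, shelling out to openssl as the log does (installCA is an illustrative name; writing to /etc/ssl/certs needs root, as the sudo in the log implies):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCA links certPath into /etc/ssl/certs under its OpenSSL
	// subject-hash name (e.g. b5213941.0), the lookup scheme used by
	// the ln -fs commands in the log.
	func installCA(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		os.Remove(link) // -f semantics: replace an existing link
		if err := os.Symlink(certPath, link); err != nil {
			return err
		}
		fmt.Println("installed", certPath, "as", link)
		return nil
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}
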
	I1018 10:35:04.194373  487845 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:35:04.199578  487845 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 10:35:04.199628  487845 kubeadm.go:400] StartCluster: {Name:no-preload-027087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-027087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:35:04.199699  487845 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:35:04.199760  487845 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:35:04.227231  487845 cri.go:89] found id: ""
	I1018 10:35:04.227308  487845 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:35:04.243654  487845 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 10:35:04.251944  487845 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 10:35:04.252009  487845 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 10:35:04.267464  487845 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 10:35:04.267486  487845 kubeadm.go:157] found existing configuration files:
	
	I1018 10:35:04.267536  487845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 10:35:04.278841  487845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 10:35:04.278909  487845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 10:35:04.286658  487845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 10:35:04.297806  487845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 10:35:04.297873  487845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 10:35:04.305634  487845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 10:35:04.313903  487845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 10:35:04.313964  487845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 10:35:04.321121  487845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 10:35:04.329585  487845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 10:35:04.329654  487845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
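
The four grep/rm pairs above amount to one sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise deleted so kubeadm init can regenerate it. A sketch of that cleanup (function name illustrative; paths taken from the log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleKubeconfigs mirrors the grep-then-rm sequence in the log:
	// configs not pinned to the expected control-plane endpoint are
	// removed so `kubeadm init` can write fresh ones. Missing files, as
	// on this first start, are simply skipped (rm -f semantics).
	func cleanStaleKubeconfigs(endpoint string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				continue // nothing on disk, kubeadm will create it
			}
			if strings.Contains(string(data), endpoint) {
				continue // already points at the right endpoint, keep it
			}
			os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}

	func main() {
		cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
	}
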
	I1018 10:35:04.337020  487845 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 10:35:04.397809  487845 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 10:35:04.398848  487845 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 10:35:04.440692  487845 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 10:35:04.440774  487845 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 10:35:04.440819  487845 kubeadm.go:318] OS: Linux
	I1018 10:35:04.440876  487845 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 10:35:04.440934  487845 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 10:35:04.440996  487845 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 10:35:04.441054  487845 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 10:35:04.441112  487845 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 10:35:04.441173  487845 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 10:35:04.441306  487845 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 10:35:04.441368  487845 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 10:35:04.441424  487845 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 10:35:04.563510  487845 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 10:35:04.563664  487845 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 10:35:04.563783  487845 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 10:35:04.589594  487845 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 10:35:01.710849  491315 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-577403
	
	I1018 10:35:01.710891  491315 ubuntu.go:182] provisioning hostname "newest-cni-577403"
	I1018 10:35:01.710974  491315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:01.742123  491315 main.go:141] libmachine: Using SSH client type: native
	I1018 10:35:01.742450  491315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33454 <nil> <nil>}
	I1018 10:35:01.742462  491315 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-577403 && echo "newest-cni-577403" | sudo tee /etc/hostname
	I1018 10:35:01.925395  491315 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-577403
	
	I1018 10:35:01.925493  491315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:01.950910  491315 main.go:141] libmachine: Using SSH client type: native
	I1018 10:35:01.951227  491315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33454 <nil> <nil>}
	I1018 10:35:01.951254  491315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-577403' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-577403/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-577403' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:35:02.137838  491315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 10:35:02.137860  491315 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:35:02.137885  491315 ubuntu.go:190] setting up certificates
	I1018 10:35:02.137895  491315 provision.go:84] configureAuth start
	I1018 10:35:02.137955  491315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-577403
	I1018 10:35:02.157975  491315 provision.go:143] copyHostCerts
	I1018 10:35:02.158046  491315 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:35:02.158056  491315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:35:02.158146  491315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:35:02.158248  491315 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:35:02.158253  491315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:35:02.158280  491315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:35:02.158348  491315 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:35:02.158352  491315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:35:02.158375  491315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:35:02.158424  491315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.newest-cni-577403 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-577403]
	I1018 10:35:02.335377  491315 provision.go:177] copyRemoteCerts
	I1018 10:35:02.335492  491315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:35:02.335550  491315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:02.355301  491315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:02.461889  491315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:35:02.482336  491315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 10:35:02.505236  491315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:35:02.535698  491315 provision.go:87] duration metric: took 397.755452ms to configureAuth
	I1018 10:35:02.535727  491315 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:35:02.535944  491315 config.go:182] Loaded profile config "newest-cni-577403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:35:02.536057  491315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:02.557988  491315 main.go:141] libmachine: Using SSH client type: native
	I1018 10:35:02.558337  491315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33454 <nil> <nil>}
	I1018 10:35:02.558352  491315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:35:02.848192  491315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:35:02.848215  491315 machine.go:96] duration metric: took 4.434790378s to provisionDockerMachine
	I1018 10:35:02.848233  491315 client.go:171] duration metric: took 12.332493954s to LocalClient.Create
	I1018 10:35:02.848259  491315 start.go:167] duration metric: took 12.332572577s to libmachine.API.Create "newest-cni-577403"
	I1018 10:35:02.848268  491315 start.go:293] postStartSetup for "newest-cni-577403" (driver="docker")
	I1018 10:35:02.848285  491315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:35:02.848359  491315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:35:02.848412  491315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:02.876748  491315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:02.995258  491315 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:35:02.999647  491315 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:35:02.999673  491315 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:35:02.999684  491315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:35:02.999740  491315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:35:02.999815  491315 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:35:02.999916  491315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:35:03.008615  491315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:35:03.042733  491315 start.go:296] duration metric: took 194.449199ms for postStartSetup
	I1018 10:35:03.043135  491315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-577403
	I1018 10:35:03.062135  491315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/config.json ...
	I1018 10:35:03.062424  491315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:35:03.062472  491315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:03.083054  491315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:03.187501  491315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:35:03.193709  491315 start.go:128] duration metric: took 12.68384096s to createHost
	I1018 10:35:03.193730  491315 start.go:83] releasing machines lock for "newest-cni-577403", held for 12.683962028s
	I1018 10:35:03.193813  491315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-577403
	I1018 10:35:03.213682  491315 ssh_runner.go:195] Run: cat /version.json
	I1018 10:35:03.213731  491315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:03.213755  491315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:35:03.213809  491315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:03.249042  491315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:03.258689  491315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:03.373298  491315 ssh_runner.go:195] Run: systemctl --version
	I1018 10:35:03.483874  491315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:35:03.530241  491315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:35:03.535654  491315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:35:03.535800  491315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:35:03.569245  491315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 10:35:03.569273  491315 start.go:495] detecting cgroup driver to use...
	I1018 10:35:03.569322  491315 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:35:03.569396  491315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:35:03.590877  491315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:35:03.607319  491315 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:35:03.607392  491315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:35:03.627429  491315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:35:03.650267  491315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:35:03.836117  491315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:35:03.996072  491315 docker.go:234] disabling docker service ...
	I1018 10:35:03.996143  491315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:35:04.024706  491315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:35:04.040674  491315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:35:04.184586  491315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:35:04.360273  491315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:35:04.377796  491315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:35:04.394929  491315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:35:04.395002  491315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:04.421291  491315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:35:04.421363  491315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:04.434395  491315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:04.446386  491315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:04.463884  491315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:35:04.472659  491315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:04.482074  491315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:04.510674  491315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
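The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image and cgroup manager lines are replaced, conmon_cgroup = "pod" is inserted after cgroup_manager, and a default_sysctls list is created (if absent) and seeded with net.ipv4.ip_unprivileged_port_start=0. A quick way to confirm the net result (a sketch, not a step the test runs):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",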
	I1018 10:35:04.531000  491315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:35:04.541744  491315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
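The echo into /proc/sys above enables IPv4 forwarding (a kubeadm preflight requirement) only until the next reboot, which is enough for a throwaway test node; a persistent variant, not what the test runs, would be:

	echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
	sudo sysctl --system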
	I1018 10:35:04.552363  491315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:35:04.709604  491315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 10:35:04.935952  491315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:35:04.936034  491315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:35:04.940151  491315 start.go:563] Will wait 60s for crictl version
	I1018 10:35:04.940229  491315 ssh_runner.go:195] Run: which crictl
	I1018 10:35:04.943884  491315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:35:04.984793  491315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:35:04.984887  491315 ssh_runner.go:195] Run: crio --version
	I1018 10:35:05.028458  491315 ssh_runner.go:195] Run: crio --version
	I1018 10:35:05.073976  491315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:35:05.077167  491315 cli_runner.go:164] Run: docker network inspect newest-cni-577403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:35:05.098785  491315 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 10:35:05.103480  491315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
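The /etc/hosts rewrite above uses a filter-and-append idiom: grep -v drops any stale host.minikube.internal entry, the fresh mapping is appended, and the result is staged in /tmp and written back with sudo cp, since a plain > redirect would run under the unprivileged shell rather than under sudo.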
	I1018 10:35:05.121539  491315 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 10:35:05.124346  491315 kubeadm.go:883] updating cluster {Name:newest-cni-577403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:35:05.124486  491315 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:35:05.124590  491315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:35:04.596364  487845 out.go:252]   - Generating certificates and keys ...
	I1018 10:35:04.596457  487845 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 10:35:04.596534  487845 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 10:35:05.657536  487845 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 10:35:06.353080  487845 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 10:35:05.168035  491315 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:35:05.168054  491315 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:35:05.168120  491315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:35:05.212897  491315 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:35:05.212923  491315 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:35:05.212931  491315 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 10:35:05.213020  491315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-577403 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
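In the kubelet unit above, the empty ExecStart= is the standard systemd idiom for list-valued directives: it clears the ExecStart inherited from the packaged unit so that the following ExecStart= replaces it rather than being rejected as a duplicate command.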
	I1018 10:35:05.213107  491315 ssh_runner.go:195] Run: crio config
	I1018 10:35:05.302805  491315 cni.go:84] Creating CNI manager for ""
	I1018 10:35:05.302828  491315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:35:05.302851  491315 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 10:35:05.302877  491315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-577403 NodeName:newest-cni-577403 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:35:05.303018  491315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-577403"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 10:35:05.303098  491315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:35:05.311623  491315 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:35:05.311692  491315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:35:05.319656  491315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 10:35:05.334078  491315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:35:05.347813  491315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
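A generated config like the one just copied to /var/tmp/minikube/kubeadm.yaml.new can be sanity-checked before init; this is an optional step the test does not perform, and it assumes the bundled kubeadm supports the validate subcommand (present in recent releases):

	/var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new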
	I1018 10:35:05.363833  491315 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:35:05.367785  491315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:35:05.378336  491315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:35:05.526597  491315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:35:05.547706  491315 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403 for IP: 192.168.85.2
	I1018 10:35:05.547730  491315 certs.go:195] generating shared ca certs ...
	I1018 10:35:05.547747  491315 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:05.547889  491315 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:35:05.547952  491315 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:35:05.547964  491315 certs.go:257] generating profile certs ...
	I1018 10:35:05.548036  491315 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/client.key
	I1018 10:35:05.548062  491315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/client.crt with IP's: []
	I1018 10:35:05.896509  491315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/client.crt ...
	I1018 10:35:05.896542  491315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/client.crt: {Name:mk4a620f6e252edfbe6fa039f5ef0d3c3124c470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:05.896769  491315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/client.key ...
	I1018 10:35:05.896784  491315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/client.key: {Name:mkb82c947dcbd71aebf03010d22d8ba4b6ed4a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:05.896883  491315 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.key.da20550e
	I1018 10:35:05.896901  491315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.crt.da20550e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 10:35:06.790261  491315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.crt.da20550e ...
	I1018 10:35:06.790296  491315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.crt.da20550e: {Name:mk6425767ce5ca0eed7ca0a74e3d5f7ee290a6b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:06.790520  491315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.key.da20550e ...
	I1018 10:35:06.790537  491315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.key.da20550e: {Name:mk652411cf65785102c5660ce6b3617cc107d612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:06.790635  491315 certs.go:382] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.crt.da20550e -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.crt
	I1018 10:35:06.790725  491315 certs.go:386] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.key.da20550e -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.key
	I1018 10:35:06.790786  491315 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/proxy-client.key
	I1018 10:35:06.790804  491315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/proxy-client.crt with IP's: []
	I1018 10:35:07.522584  491315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/proxy-client.crt ...
	I1018 10:35:07.522616  491315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/proxy-client.crt: {Name:mka42c700b058d597902cd2c25e8306eee264cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:07.522837  491315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/proxy-client.key ...
	I1018 10:35:07.522854  491315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/proxy-client.key: {Name:mkada1fa9a21ae66bfc6b16d7644562f52613443 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:07.523070  491315 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:35:07.523116  491315 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:35:07.523130  491315 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:35:07.523154  491315 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:35:07.523181  491315 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:35:07.523210  491315 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:35:07.523257  491315 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:35:07.523940  491315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:35:07.542708  491315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:35:07.562194  491315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:35:07.594770  491315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:35:07.615258  491315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 10:35:07.635674  491315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:35:07.656954  491315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:35:07.683065  491315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 10:35:07.715700  491315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:35:07.738237  491315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:35:07.776748  491315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:35:07.795979  491315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:35:07.809275  491315 ssh_runner.go:195] Run: openssl version
	I1018 10:35:07.815982  491315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:35:07.824393  491315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:35:07.828627  491315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:35:07.828694  491315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:35:07.871037  491315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:35:07.879705  491315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:35:07.888034  491315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:35:07.892272  491315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:35:07.892341  491315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:35:07.935143  491315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:35:07.943448  491315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:35:07.951467  491315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:35:07.955790  491315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:35:07.955854  491315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:35:07.997122  491315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
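The link names used above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention: the trust store looks certificates up as /etc/ssl/certs/<hash>.0, where <hash> is exactly what each openssl x509 -hash -noout call printed. For example:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941 (per the link created above), hence:
	# /etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem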
	I1018 10:35:08.005555  491315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:35:08.010455  491315 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 10:35:08.010519  491315 kubeadm.go:400] StartCluster: {Name:newest-cni-577403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:35:08.010601  491315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:35:08.010663  491315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:35:08.047649  491315 cri.go:89] found id: ""
	I1018 10:35:08.047721  491315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:35:08.057594  491315 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 10:35:08.065721  491315 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 10:35:08.065787  491315 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 10:35:08.076221  491315 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 10:35:08.076241  491315 kubeadm.go:157] found existing configuration files:
	
	I1018 10:35:08.076293  491315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 10:35:08.084903  491315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 10:35:08.084971  491315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 10:35:08.092790  491315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 10:35:08.101387  491315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 10:35:08.101458  491315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 10:35:08.109270  491315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 10:35:08.117782  491315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 10:35:08.117847  491315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 10:35:08.126552  491315 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 10:35:08.136261  491315 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 10:35:08.136340  491315 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 10:35:08.144806  491315 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 10:35:08.197062  491315 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 10:35:08.198157  491315 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 10:35:08.249869  491315 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 10:35:08.250961  491315 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 10:35:08.251012  491315 kubeadm.go:318] OS: Linux
	I1018 10:35:08.251065  491315 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 10:35:08.251120  491315 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 10:35:08.251174  491315 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 10:35:08.251227  491315 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 10:35:08.251282  491315 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 10:35:08.251336  491315 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 10:35:08.251401  491315 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 10:35:08.251456  491315 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 10:35:08.251508  491315 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 10:35:08.367390  491315 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 10:35:08.367508  491315 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 10:35:08.367607  491315 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 10:35:08.381765  491315 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 10:35:08.387805  491315 out.go:252]   - Generating certificates and keys ...
	I1018 10:35:08.387923  491315 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 10:35:08.388034  491315 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 10:35:09.184163  491315 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 10:35:09.621588  491315 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 10:35:07.001445  487845 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 10:35:07.641838  487845 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 10:35:08.155427  487845 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 10:35:08.161600  487845 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-027087] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 10:35:09.837564  487845 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 10:35:09.838164  487845 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-027087] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 10:35:10.285918  487845 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 10:35:10.412377  487845 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 10:35:10.755639  487845 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 10:35:10.756204  487845 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 10:35:10.916985  487845 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 10:35:11.035835  487845 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 10:35:11.170636  487845 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 10:35:11.405560  487845 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 10:35:11.811342  487845 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 10:35:11.812432  487845 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 10:35:11.815451  487845 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 10:35:10.465444  491315 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 10:35:11.353526  491315 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 10:35:11.758527  491315 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 10:35:11.758670  491315 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-577403] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 10:35:12.461538  491315 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 10:35:12.461680  491315 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-577403] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 10:35:13.081637  491315 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 10:35:14.212613  491315 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 10:35:14.438816  491315 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 10:35:14.439134  491315 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 10:35:14.559661  491315 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 10:35:15.001562  491315 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 10:35:11.819102  487845 out.go:252]   - Booting up control plane ...
	I1018 10:35:11.819226  487845 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 10:35:11.819315  487845 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 10:35:11.820965  487845 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 10:35:11.845588  487845 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 10:35:11.845707  487845 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 10:35:11.858130  487845 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 10:35:11.858240  487845 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 10:35:11.858287  487845 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 10:35:12.066744  487845 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 10:35:12.066876  487845 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 10:35:13.069602  487845 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00071413s
	I1018 10:35:13.070997  487845 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 10:35:13.071099  487845 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1018 10:35:13.071207  487845 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 10:35:13.071295  487845 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 10:35:15.652490  491315 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 10:35:15.745576  491315 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 10:35:17.637592  491315 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 10:35:17.637694  491315 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 10:35:17.641566  491315 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 10:35:17.645055  491315 out.go:252]   - Booting up control plane ...
	I1018 10:35:17.645162  491315 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 10:35:17.645265  491315 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 10:35:17.645336  491315 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 10:35:17.666805  491315 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 10:35:17.666919  491315 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 10:35:17.683334  491315 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 10:35:17.684128  491315 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 10:35:17.684529  491315 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 10:35:17.895140  491315 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 10:35:17.895265  491315 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 10:35:19.897266  491315 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001434163s
	I1018 10:35:19.900372  491315 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 10:35:19.900774  491315 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 10:35:19.901103  491315 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 10:35:19.901971  491315 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 10:35:18.581818  487845 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.509746388s
	I1018 10:35:21.264238  487845 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.193163137s
	I1018 10:35:22.573453  487845 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.502293339s
	I1018 10:35:22.594801  487845 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 10:35:22.615888  487845 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 10:35:22.636734  487845 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 10:35:22.636951  487845 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-027087 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 10:35:22.661523  487845 kubeadm.go:318] [bootstrap-token] Using token: d64hvm.xkgcav2o90jq0m59
	I1018 10:35:22.664535  487845 out.go:252]   - Configuring RBAC rules ...
	I1018 10:35:22.664666  487845 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 10:35:22.676037  487845 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 10:35:22.689251  487845 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 10:35:22.694329  487845 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 10:35:22.705124  487845 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 10:35:22.711924  487845 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 10:35:22.981379  487845 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 10:35:23.496803  487845 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 10:35:23.980461  487845 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 10:35:23.982134  487845 kubeadm.go:318] 
	I1018 10:35:23.982224  487845 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 10:35:23.982230  487845 kubeadm.go:318] 
	I1018 10:35:23.982310  487845 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 10:35:23.982315  487845 kubeadm.go:318] 
	I1018 10:35:23.982341  487845 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 10:35:23.982799  487845 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 10:35:23.982865  487845 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 10:35:23.982871  487845 kubeadm.go:318] 
	I1018 10:35:23.982927  487845 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 10:35:23.982932  487845 kubeadm.go:318] 
	I1018 10:35:23.982981  487845 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 10:35:23.982986  487845 kubeadm.go:318] 
	I1018 10:35:23.983040  487845 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 10:35:23.983118  487845 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 10:35:23.983188  487845 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 10:35:23.983193  487845 kubeadm.go:318] 
	I1018 10:35:23.983488  487845 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 10:35:23.983589  487845 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 10:35:23.983595  487845 kubeadm.go:318] 
	I1018 10:35:23.983902  487845 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token d64hvm.xkgcav2o90jq0m59 \
	I1018 10:35:23.984025  487845 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 \
	I1018 10:35:23.984229  487845 kubeadm.go:318] 	--control-plane 
	I1018 10:35:23.984239  487845 kubeadm.go:318] 
	I1018 10:35:23.984524  487845 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 10:35:23.984535  487845 kubeadm.go:318] 
	I1018 10:35:23.984840  487845 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token d64hvm.xkgcav2o90jq0m59 \
	I1018 10:35:23.985134  487845 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 
	I1018 10:35:23.990792  487845 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 10:35:23.991169  487845 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 10:35:23.991354  487845 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
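The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the node with the standard OpenSSL recipe (the CA path below follows this minikube layout, and the rsa step assumes an RSA CA key):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'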
	I1018 10:35:23.991385  487845 cni.go:84] Creating CNI manager for ""
	I1018 10:35:23.991422  487845 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:35:23.996778  487845 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 10:35:23.285020  491315 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.382604138s
	I1018 10:35:23.999803  487845 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 10:35:24.009835  487845 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 10:35:24.009856  487845 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 10:35:24.061082  487845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 10:35:24.560660  487845 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 10:35:24.560794  487845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:35:24.560880  487845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-027087 minikube.k8s.io/updated_at=2025_10_18T10_35_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=no-preload-027087 minikube.k8s.io/primary=true
	I1018 10:35:24.874229  487845 ops.go:34] apiserver oom_adj: -16
	I1018 10:35:24.874339  487845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:35:25.375220  487845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:35:25.874840  487845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:35:26.375241  487845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:35:25.607152  491315 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.704522804s
	I1018 10:35:27.404879  491315 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.503328354s
	I1018 10:35:27.429357  491315 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 10:35:27.455948  491315 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 10:35:27.494085  491315 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 10:35:27.494558  491315 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-577403 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 10:35:27.514634  491315 kubeadm.go:318] [bootstrap-token] Using token: ec5iln.ewxbtzh2z7f9914k
	I1018 10:35:26.874731  487845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:35:27.374407  487845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:35:27.874981  487845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:35:28.066046  487845 kubeadm.go:1113] duration metric: took 3.505292972s to wait for elevateKubeSystemPrivileges
	I1018 10:35:28.066074  487845 kubeadm.go:402] duration metric: took 23.866450334s to StartCluster
	I1018 10:35:28.066092  487845 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:28.066159  487845 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:35:28.066804  487845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:28.067055  487845 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:35:28.067207  487845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 10:35:28.067486  487845 config.go:182] Loaded profile config "no-preload-027087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:35:28.067527  487845 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:35:28.067591  487845 addons.go:69] Setting storage-provisioner=true in profile "no-preload-027087"
	I1018 10:35:28.067607  487845 addons.go:238] Setting addon storage-provisioner=true in "no-preload-027087"
	I1018 10:35:28.067632  487845 host.go:66] Checking if "no-preload-027087" exists ...
	I1018 10:35:28.068180  487845 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:35:28.068332  487845 addons.go:69] Setting default-storageclass=true in profile "no-preload-027087"
	I1018 10:35:28.068354  487845 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-027087"
	I1018 10:35:28.068707  487845 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:35:28.070542  487845 out.go:179] * Verifying Kubernetes components...
	I1018 10:35:28.074032  487845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:35:28.108489  487845 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:35:27.517735  491315 out.go:252]   - Configuring RBAC rules ...
	I1018 10:35:27.517863  491315 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 10:35:27.528429  491315 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 10:35:27.543585  491315 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 10:35:27.550275  491315 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 10:35:27.555287  491315 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 10:35:27.561317  491315 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 10:35:27.812281  491315 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 10:35:28.274687  491315 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 10:35:28.835460  491315 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 10:35:28.836686  491315 kubeadm.go:318] 
	I1018 10:35:28.836763  491315 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 10:35:28.836768  491315 kubeadm.go:318] 
	I1018 10:35:28.836848  491315 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 10:35:28.836854  491315 kubeadm.go:318] 
	I1018 10:35:28.836880  491315 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 10:35:28.836942  491315 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 10:35:28.837001  491315 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 10:35:28.837006  491315 kubeadm.go:318] 
	I1018 10:35:28.837062  491315 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 10:35:28.837067  491315 kubeadm.go:318] 
	I1018 10:35:28.837117  491315 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 10:35:28.837122  491315 kubeadm.go:318] 
	I1018 10:35:28.837176  491315 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 10:35:28.837331  491315 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 10:35:28.837404  491315 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 10:35:28.837409  491315 kubeadm.go:318] 
	I1018 10:35:28.837497  491315 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 10:35:28.837576  491315 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 10:35:28.837581  491315 kubeadm.go:318] 
	I1018 10:35:28.837668  491315 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ec5iln.ewxbtzh2z7f9914k \
	I1018 10:35:28.837775  491315 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 \
	I1018 10:35:28.837797  491315 kubeadm.go:318] 	--control-plane 
	I1018 10:35:28.837801  491315 kubeadm.go:318] 
	I1018 10:35:28.837889  491315 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 10:35:28.837894  491315 kubeadm.go:318] 
	I1018 10:35:28.837988  491315 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ec5iln.ewxbtzh2z7f9914k \
	I1018 10:35:28.838095  491315 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 
	I1018 10:35:28.844839  491315 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 10:35:28.845098  491315 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 10:35:28.845222  491315 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
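
The --discovery-token-ca-cert-hash repeated in the join commands above is deterministic: kubeadm computes SHA-256 over the DER-encoded public key (SubjectPublicKeyInfo) of the cluster CA certificate. A self-contained Go sketch of that derivation, assuming the standard kubeadm CA path:

    // Recompute kubeadm's discovery-token-ca-cert-hash from the CA cert.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("ca.crt contains no PEM block")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Hash the SubjectPublicKeyInfo, the same bytes kubeadm pins.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
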
	I1018 10:35:28.845239  491315 cni.go:84] Creating CNI manager for ""
	I1018 10:35:28.845247  491315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:35:28.848454  491315 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 10:35:28.851471  491315 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 10:35:28.861156  491315 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 10:35:28.861177  491315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 10:35:28.886584  491315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 10:35:29.526785  491315 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 10:35:29.526908  491315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:35:29.526980  491315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-577403 minikube.k8s.io/updated_at=2025_10_18T10_35_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=newest-cni-577403 minikube.k8s.io/primary=true
	I1018 10:35:29.946608  491315 ops.go:34] apiserver oom_adj: -16
	I1018 10:35:29.946732  491315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:35:28.111480  487845 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:35:28.111510  487845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:35:28.111583  487845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:35:28.116125  487845 addons.go:238] Setting addon default-storageclass=true in "no-preload-027087"
	I1018 10:35:28.116186  487845 host.go:66] Checking if "no-preload-027087" exists ...
	I1018 10:35:28.117117  487845 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:35:28.152549  487845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:35:28.163558  487845 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:35:28.163581  487845 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:35:28.163651  487845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:35:28.194318  487845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:35:28.670198  487845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:35:28.693609  487845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 10:35:28.693727  487845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:35:28.717941  487845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:35:29.748389  487845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.078153128s)
	I1018 10:35:30.008890  487845 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.315134135s)
	I1018 10:35:30.009799  487845 node_ready.go:35] waiting up to 6m0s for node "no-preload-027087" to be "Ready" ...
	I1018 10:35:30.010345  487845 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.316694737s)
	I1018 10:35:30.010390  487845 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
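
The "host record injected" message is the effect of the long sed pipeline above: a hosts{} stanza mapping host.minikube.internal to the host gateway IP is spliced into the coredns ConfigMap just before the forward directive. A short Go sketch of the same text transformation (an illustration of the effect, not minikube's actual code):

    // Insert a hosts{} stanza before CoreDNS's forward directive, as the
    // logged sed expression does.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func injectHostRecord(corefile, hostIP string) string {
    	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    	var out strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(line, "        forward . /etc/resolv.conf") {
    			out.WriteString(stanza) // lands where sed's /i address points
    		}
    		out.WriteString(line)
    	}
    	return out.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
    }
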
	I1018 10:35:30.277888  487845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.559896401s)
	I1018 10:35:30.282960  487845 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1018 10:35:30.285090  487845 addons.go:514] duration metric: took 2.21754135s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1018 10:35:30.518388  487845 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-027087" context rescaled to 1 replicas
	I1018 10:35:30.447691  491315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:35:30.946841  491315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:35:31.446844  491315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:35:31.947733  491315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:35:32.447546  491315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:35:32.947737  491315 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:35:33.085350  491315 kubeadm.go:1113] duration metric: took 3.558486449s to wait for elevateKubeSystemPrivileges
	I1018 10:35:33.085382  491315 kubeadm.go:402] duration metric: took 25.074867779s to StartCluster
	I1018 10:35:33.085399  491315 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:33.085471  491315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:35:33.086428  491315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:33.086709  491315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 10:35:33.086715  491315 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:35:33.086995  491315 config.go:182] Loaded profile config "newest-cni-577403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:35:33.087036  491315 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:35:33.087113  491315 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-577403"
	I1018 10:35:33.087122  491315 addons.go:69] Setting default-storageclass=true in profile "newest-cni-577403"
	I1018 10:35:33.087156  491315 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-577403"
	I1018 10:35:33.087126  491315 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-577403"
	I1018 10:35:33.087286  491315 host.go:66] Checking if "newest-cni-577403" exists ...
	I1018 10:35:33.087506  491315 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:33.087702  491315 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:33.091480  491315 out.go:179] * Verifying Kubernetes components...
	I1018 10:35:33.094654  491315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:35:33.127855  491315 addons.go:238] Setting addon default-storageclass=true in "newest-cni-577403"
	I1018 10:35:33.127909  491315 host.go:66] Checking if "newest-cni-577403" exists ...
	I1018 10:35:33.128325  491315 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:33.143010  491315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:35:33.147387  491315 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:35:33.147417  491315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:35:33.147498  491315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:33.181795  491315 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:35:33.181823  491315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:35:33.181887  491315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:33.201170  491315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:33.218531  491315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:33.704032  491315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:35:33.728511  491315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:35:33.772567  491315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 10:35:33.772724  491315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:35:34.800089  491315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.071542156s)
	I1018 10:35:34.800394  491315 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.027649914s)
	I1018 10:35:34.800591  491315 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.027990964s)
	I1018 10:35:34.800683  491315 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1018 10:35:34.801495  491315 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:35:34.801559  491315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:35:34.804182  491315 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1018 10:35:34.807121  491315 addons.go:514] duration metric: took 1.720071824s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1018 10:35:34.825511  491315 api_server.go:72] duration metric: took 1.738764433s to wait for apiserver process to appear ...
	I1018 10:35:34.825539  491315 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:35:34.825558  491315 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 10:35:34.844947  491315 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 10:35:34.848244  491315 api_server.go:141] control plane version: v1.34.1
	I1018 10:35:34.848276  491315 api_server.go:131] duration metric: took 22.72986ms to wait for apiserver health ...
	I1018 10:35:34.848285  491315 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:35:34.855865  491315 system_pods.go:59] 9 kube-system pods found
	I1018 10:35:34.855914  491315 system_pods.go:61] "coredns-66bc5c9577-bmcxr" [cec980f4-8542-4f1d-a671-a2f82f924274] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 10:35:34.855925  491315 system_pods.go:61] "coredns-66bc5c9577-g5hjd" [d8506151-9057-4d64-9951-94bfc8e48157] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 10:35:34.855961  491315 system_pods.go:61] "etcd-newest-cni-577403" [9061973a-4cc4-4701-ac68-b463a5c36efe] Running
	I1018 10:35:34.855979  491315 system_pods.go:61] "kindnet-dc6mn" [59b45574-ece2-4376-aacf-8e87cb8f03e7] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 10:35:34.855987  491315 system_pods.go:61] "kube-apiserver-newest-cni-577403" [bfab2b0b-ff85-4eb8-8e64-157577d51881] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:35:34.855997  491315 system_pods.go:61] "kube-controller-manager-newest-cni-577403" [0ffcd3ef-9adb-437c-9c04-32638238a83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:35:34.856003  491315 system_pods.go:61] "kube-proxy-4twn2" [060f019f-35b3-47a0-af70-f480829d1715] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 10:35:34.856008  491315 system_pods.go:61] "kube-scheduler-newest-cni-577403" [6c0fc2df-7ebe-4634-828f-7febca31dffc] Running
	I1018 10:35:34.856032  491315 system_pods.go:61] "storage-provisioner" [2f727e8b-afd6-4e3e-96f3-a9d649d239ff] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 10:35:34.856059  491315 system_pods.go:74] duration metric: took 7.76656ms to wait for pod list to return data ...
	I1018 10:35:34.856069  491315 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:35:34.861092  491315 default_sa.go:45] found service account: "default"
	I1018 10:35:34.861122  491315 default_sa.go:55] duration metric: took 5.046444ms for default service account to be created ...
	I1018 10:35:34.861136  491315 kubeadm.go:586] duration metric: took 1.774393797s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 10:35:34.861239  491315 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:35:34.868557  491315 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:35:34.868585  491315 node_conditions.go:123] node cpu capacity is 2
	I1018 10:35:34.868599  491315 node_conditions.go:105] duration metric: took 7.349874ms to run NodePressure ...
	I1018 10:35:34.868635  491315 start.go:241] waiting for startup goroutines ...
	I1018 10:35:35.304978  491315 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-577403" context rescaled to 1 replicas
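
The "rescaled to 1 replicas" step trims the default two-replica coredns Deployment down to one for a single-node cluster. With client-go that is a GetScale/UpdateScale pair on the scale subresource; a hedged sketch (kubeconfig path assumed, and not necessarily how minikube's kapi package does it):

    // Scale kube-system/coredns to a single replica via the scale subresource.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	scale.Spec.Replicas = 1
    	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("coredns rescaled to 1 replica")
    }
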
	I1018 10:35:35.305025  491315 start.go:246] waiting for cluster config update ...
	I1018 10:35:35.305065  491315 start.go:255] writing updated cluster config ...
	I1018 10:35:35.305415  491315 ssh_runner.go:195] Run: rm -f paused
	I1018 10:35:35.380695  491315 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:35:35.386954  491315 out.go:179] * Done! kubectl is now configured to use "newest-cni-577403" cluster and "default" namespace by default
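
The api_server.go wait above gates readiness on GET https://192.168.85.2:8443/healthz answering 200/"ok". A minimal Go poller in the same spirit; the endpoint comes from this log, and skipping TLS verification is a shortcut for the sketch (a real client would trust the cluster CA instead):

    // Poll the apiserver healthz endpoint until it reports 200.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.85.2:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz returned 200: ok")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // same cadence as the get-sa polls above
    	}
    	fmt.Println("timed out waiting for healthz")
    }
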
	
	
	==> CRI-O <==
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.048677248Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.054032915Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=58fdbd59-5a88-4d55-a317-d8f07ca86334 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.058602209Z" level=info msg="Ran pod sandbox 4f283a931040a901f573b2dbbdd9c7a3eaa19a0053de23ac0a3898dbf381be5c with infra container: kube-system/kube-proxy-4twn2/POD" id=58fdbd59-5a88-4d55-a317-d8f07ca86334 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.063165916Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=dd1a47ee-c227-4ad5-a75d-75a0094f5799 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.064219328Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d1de3c07-0989-44a9-937e-c1f45e410395 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.072543524Z" level=info msg="Creating container: kube-system/kube-proxy-4twn2/kube-proxy" id=7552c586-ed15-4aa6-a81f-14e0a1d29732 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.072949412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.07673221Z" level=info msg="Running pod sandbox: kube-system/kindnet-dc6mn/POD" id=1543a2e1-4b35-42dc-b3e5-9db6d245026a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.076827769Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.082138455Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.082941065Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1543a2e1-4b35-42dc-b3e5-9db6d245026a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.083100829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.090968609Z" level=info msg="Ran pod sandbox dcbf0f1f5a9390f91bfd3e181825040531cdc8a8fb46b3639f11b97c219070ba with infra container: kube-system/kindnet-dc6mn/POD" id=1543a2e1-4b35-42dc-b3e5-9db6d245026a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.093557613Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=41d96b6a-b4f1-468c-b394-7b30cba4fff4 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.095191561Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3552033c-f191-4a1a-852f-f773736d5e57 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.102251741Z" level=info msg="Creating container: kube-system/kindnet-dc6mn/kindnet-cni" id=c4630dbf-28aa-400d-b978-56cfcc7ce027 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.104739698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.113178176Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.118945565Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.124346476Z" level=info msg="Created container 14dc8c7453d602e5f20c9fa3c8db77acd6b45e35376d72f17c815883ec9251aa: kube-system/kube-proxy-4twn2/kube-proxy" id=7552c586-ed15-4aa6-a81f-14e0a1d29732 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.127768441Z" level=info msg="Starting container: 14dc8c7453d602e5f20c9fa3c8db77acd6b45e35376d72f17c815883ec9251aa" id=7be2cbad-5d9e-4afc-b1b9-d7bf88183fe5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.139152128Z" level=info msg="Started container" PID=1488 containerID=14dc8c7453d602e5f20c9fa3c8db77acd6b45e35376d72f17c815883ec9251aa description=kube-system/kube-proxy-4twn2/kube-proxy id=7be2cbad-5d9e-4afc-b1b9-d7bf88183fe5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f283a931040a901f573b2dbbdd9c7a3eaa19a0053de23ac0a3898dbf381be5c
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.14478263Z" level=info msg="Created container 3e8eeca730a6461348602b9cd909351ba56128f8116dda543f6c84cecc878374: kube-system/kindnet-dc6mn/kindnet-cni" id=c4630dbf-28aa-400d-b978-56cfcc7ce027 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.148402268Z" level=info msg="Starting container: 3e8eeca730a6461348602b9cd909351ba56128f8116dda543f6c84cecc878374" id=0e2995fb-aba3-4d4c-a751-fd69bc45f87a name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:35:35 newest-cni-577403 crio[844]: time="2025-10-18T10:35:35.154366459Z" level=info msg="Started container" PID=1493 containerID=3e8eeca730a6461348602b9cd909351ba56128f8116dda543f6c84cecc878374 description=kube-system/kindnet-dc6mn/kindnet-cni id=0e2995fb-aba3-4d4c-a751-fd69bc45f87a name=/runtime.v1.RuntimeService/StartContainer sandboxID=dcbf0f1f5a9390f91bfd3e181825040531cdc8a8fb46b3639f11b97c219070ba
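
The CRI-O lines above are the runtime's half of kubelet CRI calls (RunPodSandbox, CreateContainer, StartContainer) carried over the crio.sock gRPC endpoint. A small Go sketch that talks to the same API, here only asking for the runtime version; the socket path is CRI-O's default and an assumption for this sketch:

    // Query CRI-O over its CRI gRPC socket for name/version.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	v, err := runtimeapi.NewRuntimeServiceClient(conn).Version(ctx, &runtimeapi.VersionRequest{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s %s\n", v.RuntimeName, v.RuntimeVersion) // cri-o 1.34.1 here
    }
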
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3e8eeca730a64       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   1 second ago        Running             kindnet-cni               0                   dcbf0f1f5a939       kindnet-dc6mn                               kube-system
	14dc8c7453d60       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   1 second ago        Running             kube-proxy                0                   4f283a931040a       kube-proxy-4twn2                            kube-system
	fcf942f72d2b2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   16 seconds ago      Running             etcd                      0                   444a91a1363f8       etcd-newest-cni-577403                      kube-system
	4b0e4cbd7e411       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   16 seconds ago      Running             kube-controller-manager   0                   319bd7f78029a       kube-controller-manager-newest-cni-577403   kube-system
	c0f1f1338e6ba       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   16 seconds ago      Running             kube-scheduler            0                   5513f3688d86f       kube-scheduler-newest-cni-577403            kube-system
	205fb7e59e127       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   16 seconds ago      Running             kube-apiserver            0                   b11eef47fd2fa       kube-apiserver-newest-cni-577403            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-577403
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-577403
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=newest-cni-577403
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_35_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:35:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-577403
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:35:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:35:28 +0000   Sat, 18 Oct 2025 10:35:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:35:28 +0000   Sat, 18 Oct 2025 10:35:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:35:28 +0000   Sat, 18 Oct 2025 10:35:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 10:35:28 +0000   Sat, 18 Oct 2025 10:35:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-577403
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                094f3965-d36a-4b5c-959d-94a9f33348db
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-577403                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8s
	  kube-system                 kindnet-dc6mn                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-577403             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-577403    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-4twn2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-577403             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Warning  CgroupV1                 17s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17s (x8 over 17s)  kubelet          Node newest-cni-577403 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s (x8 over 17s)  kubelet          Node newest-cni-577403 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s (x8 over 17s)  kubelet          Node newest-cni-577403 status is now: NodeHasSufficientPID
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-577403 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-577403 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s                 kubelet          Node newest-cni-577403 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-577403 event: Registered Node newest-cni-577403 in Controller
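
The node above still carries the node.kubernetes.io/not-ready:NoSchedule taint and reports Ready=False (NetworkPluginNotReady) until kindnet writes its CNI config, which is why coredns and storage-provisioner sit in Pending in the pod list earlier. Checking that condition programmatically is a few lines of client-go; a hedged sketch with an assumed kubeconfig path:

    // Print the Ready condition of the node, as minikube's node_ready wait does.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "newest-cni-577403", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			fmt.Printf("Ready=%s reason=%s\n", c.Status, c.Reason)
    		}
    	}
    }
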
	
	
	==> dmesg <==
	[Oct18 10:15] overlayfs: idmapped layers are currently not supported
	[Oct18 10:16] overlayfs: idmapped layers are currently not supported
	[  +1.944912] overlayfs: idmapped layers are currently not supported
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	[ +55.677340] overlayfs: idmapped layers are currently not supported
	[  +3.870584] overlayfs: idmapped layers are currently not supported
	[Oct18 10:24] overlayfs: idmapped layers are currently not supported
	[ +31.226998] overlayfs: idmapped layers are currently not supported
	[Oct18 10:27] overlayfs: idmapped layers are currently not supported
	[ +41.576921] overlayfs: idmapped layers are currently not supported
	[  +5.117406] overlayfs: idmapped layers are currently not supported
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	[Oct18 10:31] overlayfs: idmapped layers are currently not supported
	[  +3.453230] overlayfs: idmapped layers are currently not supported
	[Oct18 10:33] overlayfs: idmapped layers are currently not supported
	[  +6.524055] overlayfs: idmapped layers are currently not supported
	[Oct18 10:34] overlayfs: idmapped layers are currently not supported
	[Oct18 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fcf942f72d2b2b716907ca23b5dc470d90a231bba4f54252503eb90038d86062] <==
	{"level":"warn","ts":"2025-10-18T10:35:23.568196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:23.616185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:23.645149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:23.673500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:23.697290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:23.726299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:23.754806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:23.794179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:23.817905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:23.859418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:23.897700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:23.921674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:23.936965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:23.951501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:23.996346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:24.025654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:24.069159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:24.097226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:24.128827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:24.168399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:24.209142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:24.238297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:24.273550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:24.299379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:24.411751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33960","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:35:37 up  2:18,  0 user,  load average: 4.48, 4.34, 3.38
	Linux newest-cni-577403 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3e8eeca730a6461348602b9cd909351ba56128f8116dda543f6c84cecc878374] <==
	I1018 10:35:35.219812       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:35:35.309497       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 10:35:35.309751       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:35:35.309814       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:35:35.309856       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:35:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:35:35.515623       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:35:35.515744       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:35:35.515782       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:35:35.519877       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [205fb7e59e127449342c8177a536c5949ecc1f334f22e8f0062816e2ed9522f9] <==
	I1018 10:35:25.658800       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 10:35:25.661096       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1018 10:35:25.661656       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 10:35:25.692169       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:35:25.693645       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 10:35:25.709824       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:35:25.716164       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 10:35:25.869982       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 10:35:26.277494       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 10:35:26.283516       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 10:35:26.283540       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 10:35:27.218676       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 10:35:27.275703       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 10:35:27.369619       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 10:35:27.380354       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1018 10:35:27.382462       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 10:35:27.389461       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 10:35:27.458727       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 10:35:28.230937       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 10:35:28.270433       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 10:35:28.294952       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 10:35:33.411021       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 10:35:33.694172       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 10:35:33.712032       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:35:33.854432       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [4b0e4cbd7e4113f4ac83a5da637f068e9d101dd886de77447e631481b20c43a8] <==
	I1018 10:35:32.510422       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 10:35:32.524991       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 10:35:32.525045       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 10:35:32.525114       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 10:35:32.528659       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-577403" podCIDRs=["10.42.0.0/24"]
	I1018 10:35:32.530036       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 10:35:32.530936       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:35:32.530959       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:35:32.530966       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 10:35:32.530972       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 10:35:32.533204       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 10:35:32.538164       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 10:35:32.548467       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 10:35:32.551974       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 10:35:32.552057       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 10:35:32.558059       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 10:35:32.564776       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 10:35:32.573572       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 10:35:32.575129       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:35:32.582554       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 10:35:32.583129       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:35:32.602505       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 10:35:32.602632       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 10:35:32.602714       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-577403"
	I1018 10:35:32.602751       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [14dc8c7453d602e5f20c9fa3c8db77acd6b45e35376d72f17c815883ec9251aa] <==
	I1018 10:35:35.194027       1 server_linux.go:53] "Using iptables proxy"
	I1018 10:35:35.278734       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 10:35:35.380403       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 10:35:35.380570       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 10:35:35.380749       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 10:35:35.463763       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:35:35.463816       1 server_linux.go:132] "Using iptables Proxier"
	I1018 10:35:35.472470       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 10:35:35.472949       1 server.go:527] "Version info" version="v1.34.1"
	I1018 10:35:35.473214       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:35:35.488199       1 config.go:200] "Starting service config controller"
	I1018 10:35:35.488223       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 10:35:35.488242       1 config.go:106] "Starting endpoint slice config controller"
	I1018 10:35:35.488247       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 10:35:35.488279       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 10:35:35.488283       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 10:35:35.500100       1 config.go:309] "Starting node config controller"
	I1018 10:35:35.505701       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 10:35:35.505785       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 10:35:35.589048       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 10:35:35.589075       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 10:35:35.589095       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c0f1f1338e6ba502a1aba0ce90d251d4f1047b223f7b3f05aaa749f0e0df377d] <==
	E1018 10:35:25.618272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 10:35:25.618387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 10:35:25.618537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 10:35:25.618645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 10:35:25.619749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 10:35:25.619885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 10:35:25.620056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 10:35:25.620428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 10:35:25.620726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 10:35:25.621377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 10:35:25.621397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 10:35:25.621432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 10:35:25.621450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 10:35:25.621466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 10:35:26.497438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 10:35:26.529144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 10:35:26.547693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 10:35:26.610502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 10:35:26.661776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 10:35:26.739033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 10:35:26.779658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 10:35:26.814845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 10:35:26.831426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 10:35:26.928788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 10:35:28.778341       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 10:35:28 newest-cni-577403 kubelet[1315]: I1018 10:35:28.846417    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1089664d1b28662b79948edd92c005b-etc-ca-certificates\") pod \"kube-controller-manager-newest-cni-577403\" (UID: \"a1089664d1b28662b79948edd92c005b\") " pod="kube-system/kube-controller-manager-newest-cni-577403"
	Oct 18 10:35:28 newest-cni-577403 kubelet[1315]: I1018 10:35:28.846463    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a1089664d1b28662b79948edd92c005b-flexvolume-dir\") pod \"kube-controller-manager-newest-cni-577403\" (UID: \"a1089664d1b28662b79948edd92c005b\") " pod="kube-system/kube-controller-manager-newest-cni-577403"
	Oct 18 10:35:28 newest-cni-577403 kubelet[1315]: I1018 10:35:28.846504    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1089664d1b28662b79948edd92c005b-kubeconfig\") pod \"kube-controller-manager-newest-cni-577403\" (UID: \"a1089664d1b28662b79948edd92c005b\") " pod="kube-system/kube-controller-manager-newest-cni-577403"
	Oct 18 10:35:28 newest-cni-577403 kubelet[1315]: I1018 10:35:28.846533    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/1a5a9f76d3608e06c1181766376e7342-etcd-certs\") pod \"etcd-newest-cni-577403\" (UID: \"1a5a9f76d3608e06c1181766376e7342\") " pod="kube-system/etcd-newest-cni-577403"
	Oct 18 10:35:29 newest-cni-577403 kubelet[1315]: I1018 10:35:29.326281    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-577403" podStartSLOduration=1.3262620809999999 podStartE2EDuration="1.326262081s" podCreationTimestamp="2025-10-18 10:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:35:29.296468399 +0000 UTC m=+1.173833557" watchObservedRunningTime="2025-10-18 10:35:29.326262081 +0000 UTC m=+1.203627239"
	Oct 18 10:35:29 newest-cni-577403 kubelet[1315]: I1018 10:35:29.350004    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-577403" podStartSLOduration=1.349955897 podStartE2EDuration="1.349955897s" podCreationTimestamp="2025-10-18 10:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:35:29.328804093 +0000 UTC m=+1.206169276" watchObservedRunningTime="2025-10-18 10:35:29.349955897 +0000 UTC m=+1.227321055"
	Oct 18 10:35:29 newest-cni-577403 kubelet[1315]: I1018 10:35:29.389934    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-577403" podStartSLOduration=1.389913862 podStartE2EDuration="1.389913862s" podCreationTimestamp="2025-10-18 10:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:35:29.35554904 +0000 UTC m=+1.232914214" watchObservedRunningTime="2025-10-18 10:35:29.389913862 +0000 UTC m=+1.267279020"
	Oct 18 10:35:29 newest-cni-577403 kubelet[1315]: I1018 10:35:29.603858    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-577403" podStartSLOduration=1.6038410600000002 podStartE2EDuration="1.60384106s" podCreationTimestamp="2025-10-18 10:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:35:29.391566985 +0000 UTC m=+1.268932143" watchObservedRunningTime="2025-10-18 10:35:29.60384106 +0000 UTC m=+1.481206210"
	Oct 18 10:35:32 newest-cni-577403 kubelet[1315]: I1018 10:35:32.627534    1315 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 10:35:32 newest-cni-577403 kubelet[1315]: I1018 10:35:32.628910    1315 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 10:35:33 newest-cni-577403 kubelet[1315]: E1018 10:35:33.580496    1315 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:newest-cni-577403\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-577403' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 18 10:35:33 newest-cni-577403 kubelet[1315]: E1018 10:35:33.582467    1315 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-4twn2\" is forbidden: User \"system:node:newest-cni-577403\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-577403' and this object" podUID="060f019f-35b3-47a0-af70-f480829d1715" pod="kube-system/kube-proxy-4twn2"
	Oct 18 10:35:33 newest-cni-577403 kubelet[1315]: E1018 10:35:33.583973    1315 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:newest-cni-577403\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-577403' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 18 10:35:33 newest-cni-577403 kubelet[1315]: I1018 10:35:33.599852    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/060f019f-35b3-47a0-af70-f480829d1715-kube-proxy\") pod \"kube-proxy-4twn2\" (UID: \"060f019f-35b3-47a0-af70-f480829d1715\") " pod="kube-system/kube-proxy-4twn2"
	Oct 18 10:35:33 newest-cni-577403 kubelet[1315]: I1018 10:35:33.599904    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/060f019f-35b3-47a0-af70-f480829d1715-xtables-lock\") pod \"kube-proxy-4twn2\" (UID: \"060f019f-35b3-47a0-af70-f480829d1715\") " pod="kube-system/kube-proxy-4twn2"
	Oct 18 10:35:33 newest-cni-577403 kubelet[1315]: I1018 10:35:33.599925    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f29jm\" (UniqueName: \"kubernetes.io/projected/59b45574-ece2-4376-aacf-8e87cb8f03e7-kube-api-access-f29jm\") pod \"kindnet-dc6mn\" (UID: \"59b45574-ece2-4376-aacf-8e87cb8f03e7\") " pod="kube-system/kindnet-dc6mn"
	Oct 18 10:35:33 newest-cni-577403 kubelet[1315]: I1018 10:35:33.599948    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/060f019f-35b3-47a0-af70-f480829d1715-lib-modules\") pod \"kube-proxy-4twn2\" (UID: \"060f019f-35b3-47a0-af70-f480829d1715\") " pod="kube-system/kube-proxy-4twn2"
	Oct 18 10:35:33 newest-cni-577403 kubelet[1315]: I1018 10:35:33.599973    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59b45574-ece2-4376-aacf-8e87cb8f03e7-xtables-lock\") pod \"kindnet-dc6mn\" (UID: \"59b45574-ece2-4376-aacf-8e87cb8f03e7\") " pod="kube-system/kindnet-dc6mn"
	Oct 18 10:35:33 newest-cni-577403 kubelet[1315]: I1018 10:35:33.599990    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59b45574-ece2-4376-aacf-8e87cb8f03e7-lib-modules\") pod \"kindnet-dc6mn\" (UID: \"59b45574-ece2-4376-aacf-8e87cb8f03e7\") " pod="kube-system/kindnet-dc6mn"
	Oct 18 10:35:33 newest-cni-577403 kubelet[1315]: I1018 10:35:33.600009    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzd77\" (UniqueName: \"kubernetes.io/projected/060f019f-35b3-47a0-af70-f480829d1715-kube-api-access-wzd77\") pod \"kube-proxy-4twn2\" (UID: \"060f019f-35b3-47a0-af70-f480829d1715\") " pod="kube-system/kube-proxy-4twn2"
	Oct 18 10:35:33 newest-cni-577403 kubelet[1315]: I1018 10:35:33.600024    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/59b45574-ece2-4376-aacf-8e87cb8f03e7-cni-cfg\") pod \"kindnet-dc6mn\" (UID: \"59b45574-ece2-4376-aacf-8e87cb8f03e7\") " pod="kube-system/kindnet-dc6mn"
	Oct 18 10:35:34 newest-cni-577403 kubelet[1315]: I1018 10:35:34.829476    1315 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 10:35:35 newest-cni-577403 kubelet[1315]: W1018 10:35:35.087694    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07/crio-dcbf0f1f5a9390f91bfd3e181825040531cdc8a8fb46b3639f11b97c219070ba WatchSource:0}: Error finding container dcbf0f1f5a9390f91bfd3e181825040531cdc8a8fb46b3639f11b97c219070ba: Status 404 returned error can't find the container with id dcbf0f1f5a9390f91bfd3e181825040531cdc8a8fb46b3639f11b97c219070ba
	Oct 18 10:35:35 newest-cni-577403 kubelet[1315]: I1018 10:35:35.675369    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-dc6mn" podStartSLOduration=2.675349714 podStartE2EDuration="2.675349714s" podCreationTimestamp="2025-10-18 10:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:35:35.674922426 +0000 UTC m=+7.552287576" watchObservedRunningTime="2025-10-18 10:35:35.675349714 +0000 UTC m=+7.552714863"
	Oct 18 10:35:35 newest-cni-577403 kubelet[1315]: I1018 10:35:35.722934    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4twn2" podStartSLOduration=2.722914296 podStartE2EDuration="2.722914296s" podCreationTimestamp="2025-10-18 10:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:35:35.701259469 +0000 UTC m=+7.578624643" watchObservedRunningTime="2025-10-18 10:35:35.722914296 +0000 UTC m=+7.600279454"
	

                                                
                                                
-- /stdout --
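Note on the logs above: kube-proxy comes up cleanly but warns that nodePortAddresses is unset, quoting its own suggested fix (`--nodeport-addresses primary`). A minimal check of that setting, assuming the cluster from this run is still reachable and that kubeadm's usual layout applies (the KubeProxyConfiguration lives in the kube-proxy ConfigMap):

	kubectl --context newest-cni-577403 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses   # look for the field the warning refers to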
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-577403 -n newest-cni-577403
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-577403 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-g5hjd storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-577403 describe pod coredns-66bc5c9577-g5hjd storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-577403 describe pod coredns-66bc5c9577-g5hjd storage-provisioner: exit status 1 (95.69015ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-g5hjd" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-577403 describe pod coredns-66bc5c9577-g5hjd storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.52s)
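The describe step above fails with NotFound for both pods, which suggests the names captured by the earlier list (coredns-66bc5c9577-g5hjd, storage-provisioner) were already gone again by describe time. A minimal sketch that re-resolves the names and describes them in one pass, assuming the same context (an illustration, not the harness's actual code):

	kubectl --context newest-cni-577403 get po -A --field-selector=status.phase!=Running \
	  -o=jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
	  while read ns name; do
	    kubectl --context newest-cni-577403 -n "$ns" describe pod "$name"   # describe while the name is still current
	  done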

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (7.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-577403 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-577403 --alsologtostderr -v=1: exit status 80 (2.543829282s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-577403 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 10:35:55.924493  497265 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:35:55.924656  497265 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:35:55.924665  497265 out.go:374] Setting ErrFile to fd 2...
	I1018 10:35:55.924670  497265 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:35:55.924970  497265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:35:55.925292  497265 out.go:368] Setting JSON to false
	I1018 10:35:55.925340  497265 mustload.go:65] Loading cluster: newest-cni-577403
	I1018 10:35:55.925763  497265 config.go:182] Loaded profile config "newest-cni-577403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:35:55.926269  497265 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:55.949045  497265 host.go:66] Checking if "newest-cni-577403" exists ...
	I1018 10:35:55.949401  497265 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:35:56.015885  497265 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 10:35:56.004273348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:35:56.016571  497265 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-577403 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 10:35:56.020167  497265 out.go:179] * Pausing node newest-cni-577403 ... 
	I1018 10:35:56.024146  497265 host.go:66] Checking if "newest-cni-577403" exists ...
	I1018 10:35:56.024547  497265 ssh_runner.go:195] Run: systemctl --version
	I1018 10:35:56.024603  497265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:56.052266  497265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:56.158234  497265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:35:56.173593  497265 pause.go:52] kubelet running: true
	I1018 10:35:56.173676  497265 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:35:56.552729  497265 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:35:56.552812  497265 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:35:56.649273  497265 cri.go:89] found id: "143638a5b7bf803e029dbffcae584d824d7c7a881004b359c01169857f62bcd3"
	I1018 10:35:56.649296  497265 cri.go:89] found id: "819fe87a1d42b01fc86148fa045944c436638f510eb5f3bd9020c228e244a301"
	I1018 10:35:56.649301  497265 cri.go:89] found id: "4931d62ebebc151d36b33ceac56370520ce022b159f398ba2c6d4d5335fe5cd5"
	I1018 10:35:56.649305  497265 cri.go:89] found id: "8598a86fdd0b5578be0124e533f2578cdbca59b60d2e2c51ec223a9bceea0ced"
	I1018 10:35:56.649309  497265 cri.go:89] found id: "b81cb4d2f278c266341a3cd9b07f6427e26118aa0c261292dea5cf46666371e8"
	I1018 10:35:56.649313  497265 cri.go:89] found id: "f87e3ee83d2a07038569e0e133062e319fd5545af2a5f970168374c1227e8428"
	I1018 10:35:56.649316  497265 cri.go:89] found id: ""
	I1018 10:35:56.649373  497265 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:35:56.691525  497265 retry.go:31] will retry after 355.738367ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:35:56Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:35:57.048086  497265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:35:57.065394  497265 pause.go:52] kubelet running: false
	I1018 10:35:57.065470  497265 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:35:57.392643  497265 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:35:57.392738  497265 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:35:57.515531  497265 cri.go:89] found id: "143638a5b7bf803e029dbffcae584d824d7c7a881004b359c01169857f62bcd3"
	I1018 10:35:57.515552  497265 cri.go:89] found id: "819fe87a1d42b01fc86148fa045944c436638f510eb5f3bd9020c228e244a301"
	I1018 10:35:57.515558  497265 cri.go:89] found id: "4931d62ebebc151d36b33ceac56370520ce022b159f398ba2c6d4d5335fe5cd5"
	I1018 10:35:57.515561  497265 cri.go:89] found id: "8598a86fdd0b5578be0124e533f2578cdbca59b60d2e2c51ec223a9bceea0ced"
	I1018 10:35:57.515565  497265 cri.go:89] found id: "b81cb4d2f278c266341a3cd9b07f6427e26118aa0c261292dea5cf46666371e8"
	I1018 10:35:57.515569  497265 cri.go:89] found id: "f87e3ee83d2a07038569e0e133062e319fd5545af2a5f970168374c1227e8428"
	I1018 10:35:57.515572  497265 cri.go:89] found id: ""
	I1018 10:35:57.515621  497265 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:35:57.532173  497265 retry.go:31] will retry after 503.366575ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:35:57Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:35:58.035790  497265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:35:58.052206  497265 pause.go:52] kubelet running: false
	I1018 10:35:58.052270  497265 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:35:58.265311  497265 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:35:58.265407  497265 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:35:58.375189  497265 cri.go:89] found id: "143638a5b7bf803e029dbffcae584d824d7c7a881004b359c01169857f62bcd3"
	I1018 10:35:58.375209  497265 cri.go:89] found id: "819fe87a1d42b01fc86148fa045944c436638f510eb5f3bd9020c228e244a301"
	I1018 10:35:58.375214  497265 cri.go:89] found id: "4931d62ebebc151d36b33ceac56370520ce022b159f398ba2c6d4d5335fe5cd5"
	I1018 10:35:58.375218  497265 cri.go:89] found id: "8598a86fdd0b5578be0124e533f2578cdbca59b60d2e2c51ec223a9bceea0ced"
	I1018 10:35:58.375222  497265 cri.go:89] found id: "b81cb4d2f278c266341a3cd9b07f6427e26118aa0c261292dea5cf46666371e8"
	I1018 10:35:58.375225  497265 cri.go:89] found id: "f87e3ee83d2a07038569e0e133062e319fd5545af2a5f970168374c1227e8428"
	I1018 10:35:58.375228  497265 cri.go:89] found id: ""
	I1018 10:35:58.375277  497265 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:35:58.392898  497265 out.go:203] 
	W1018 10:35:58.395840  497265 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:35:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:35:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 10:35:58.395871  497265 out.go:285] * 
	* 
	W1018 10:35:58.404325  497265 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 10:35:58.407073  497265 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-577403 --alsologtostderr -v=1 failed: exit status 80
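The trace shows what pause does before giving up: disable the kubelet, enumerate kube-system/kubernetes-dashboard/istio-operator containers through crictl, then ask runc for the running set. It is the final `sudo runc list -f json` that keeps failing, because runc's default state root `/run/runc` does not exist on this node even though crictl sees six containers. A minimal sketch of the same probe over SSH, assuming the node is still up (the state-root path is runc's documented default, inferred here only from the error text):

	minikube -p newest-cni-577403 ssh -- sudo crictl ps --quiet      # CRI-O reports running containers
	minikube -p newest-cni-577403 ssh -- sudo ls /run/runc           # expected to fail: state root absent
	minikube -p newest-cni-577403 ssh -- sudo runc list -f json      # reproduces the GUEST_PAUSE error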
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-577403
helpers_test.go:243: (dbg) docker inspect newest-cni-577403:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07",
	        "Created": "2025-10-18T10:34:57.122600154Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 495518,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:35:40.132542858Z",
	            "FinishedAt": "2025-10-18T10:35:39.238507058Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07/hosts",
	        "LogPath": "/var/lib/docker/containers/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07-json.log",
	        "Name": "/newest-cni-577403",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-577403:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-577403",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07",
	                "LowerDir": "/var/lib/docker/overlay2/f2be6c97e2d5e190ff5e8c5239916812c89591cd76f86c315857e04c4fbe56ba-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2be6c97e2d5e190ff5e8c5239916812c89591cd76f86c315857e04c4fbe56ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2be6c97e2d5e190ff5e8c5239916812c89591cd76f86c315857e04c4fbe56ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2be6c97e2d5e190ff5e8c5239916812c89591cd76f86c315857e04c4fbe56ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-577403",
	                "Source": "/var/lib/docker/volumes/newest-cni-577403/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-577403",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-577403",
	                "name.minikube.sigs.k8s.io": "newest-cni-577403",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "87320a71c9483a008f6dab65565a3aee10e7da8f0fc1e9aa9f5b4ecc201a6c26",
	            "SandboxKey": "/var/run/docker/netns/87320a71c948",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-577403": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:27:c0:76:98:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4944dc29f48a85d80603faba3e0eb9e1b1723b9d4244f496af940a2c5ae27592",
	                    "EndpointID": "f398099f91dd18a43b4f5278aba5bd74f0e8ac8e7a60cf0de4c87bb4e7564545",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-577403",
	                        "8f5c98145c70"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
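In the inspect output, HostConfig.PortBindings requests ephemeral host ports (every HostPort is empty), and the resolved values appear only under NetworkSettings.Ports (22/tcp mapped to 33459, and so on). The pause trace earlier reads the SSH port back with exactly this Go template; the same one-liner works standalone:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-577403   # prints 33459 for this run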
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-577403 -n newest-cni-577403
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-577403 -n newest-cni-577403: exit status 2 (431.984006ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-577403 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-577403 logs -n 25: (1.726948841s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p default-k8s-diff-port-715182 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-101897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	│ stop    │ -p embed-certs-101897 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-715182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-101897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ start   │ -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:34 UTC │
	│ image   │ default-k8s-diff-port-715182 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ pause   │ -p default-k8s-diff-port-715182 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-715182                                                                                                                                                                                                               │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p default-k8s-diff-port-715182                                                                                                                                                                                                               │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-922359                                                                                                                                                                                                               │ disable-driver-mounts-922359 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ start   │ -p no-preload-027087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:35 UTC │
	│ image   │ embed-certs-101897 image list --format=json                                                                                                                                                                                                   │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ pause   │ -p embed-certs-101897 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	│ delete  │ -p embed-certs-101897                                                                                                                                                                                                                         │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p embed-certs-101897                                                                                                                                                                                                                         │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ start   │ -p newest-cni-577403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:35 UTC │
	│ addons  │ enable metrics-server -p newest-cni-577403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │                     │
	│ stop    │ -p newest-cni-577403 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ addons  │ enable dashboard -p newest-cni-577403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ start   │ -p newest-cni-577403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ image   │ newest-cni-577403 image list --format=json                                                                                                                                                                                                    │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ pause   │ -p newest-cni-577403 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-027087 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:35:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 10:35:39.801164  495391 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:35:39.801334  495391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:35:39.801345  495391 out.go:374] Setting ErrFile to fd 2...
	I1018 10:35:39.801370  495391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:35:39.801669  495391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:35:39.802108  495391 out.go:368] Setting JSON to false
	I1018 10:35:39.803097  495391 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8290,"bootTime":1760775450,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:35:39.803164  495391 start.go:141] virtualization:  
	I1018 10:35:39.806402  495391 out.go:179] * [newest-cni-577403] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:35:39.810328  495391 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:35:39.810427  495391 notify.go:220] Checking for updates...
	I1018 10:35:39.816271  495391 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:35:39.819256  495391 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:35:39.822114  495391 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:35:39.825029  495391 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:35:39.828055  495391 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:35:39.831505  495391 config.go:182] Loaded profile config "newest-cni-577403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:35:39.832111  495391 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:35:39.863036  495391 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:35:39.863166  495391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:35:39.927173  495391 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 10:35:39.917731447 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:35:39.927286  495391 docker.go:318] overlay module found
	I1018 10:35:39.930366  495391 out.go:179] * Using the docker driver based on existing profile
	I1018 10:35:39.933123  495391 start.go:305] selected driver: docker
	I1018 10:35:39.933144  495391 start.go:925] validating driver "docker" against &{Name:newest-cni-577403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:35:39.933390  495391 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:35:39.934106  495391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:35:40.005381  495391 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 10:35:39.995619077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:35:40.005732  495391 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 10:35:40.005761  495391 cni.go:84] Creating CNI manager for ""
	I1018 10:35:40.005822  495391 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:35:40.005906  495391 start.go:349] cluster config:
	{Name:newest-cni-577403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:35:40.026453  495391 out.go:179] * Starting "newest-cni-577403" primary control-plane node in "newest-cni-577403" cluster
	I1018 10:35:40.033077  495391 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:35:40.041033  495391 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:35:40.053031  495391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:35:40.053164  495391 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:35:40.053229  495391 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 10:35:40.053243  495391 cache.go:58] Caching tarball of preloaded images
	I1018 10:35:40.053329  495391 preload.go:233] Found /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 10:35:40.053342  495391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 10:35:40.053461  495391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/config.json ...
	I1018 10:35:40.074618  495391 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:35:40.074639  495391 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 10:35:40.074659  495391 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:35:40.074684  495391 start.go:360] acquireMachinesLock for newest-cni-577403: {Name:mk1e4df99ad9f1535f8fd365f2c9b2df285e2ff8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:35:40.074753  495391 start.go:364] duration metric: took 49.289µs to acquireMachinesLock for "newest-cni-577403"
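
The acquireMachinesLock step above serializes machine mutations across concurrent minikube processes; note the Delay:500ms / Timeout:10m0s parameters in the log line. A minimal Go sketch of one way to get that behavior with an exclusive lock file; the path and main function are illustrative, and this is not minikube's actual lock package:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquire takes an exclusive advisory lock by creating a lock file;
// O_EXCL makes the create fail if the file already exists, so only
// one process wins. Polls every 500ms, matching the Delay in the log.
func acquire(path string, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("lock %s: timed out after %s", path, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	release, err := acquire("/tmp/minikube-machines.lock", 10*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held; safe to mutate machine state")
}
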
	I1018 10:35:40.074783  495391 start.go:96] Skipping create...Using existing machine configuration
	I1018 10:35:40.074789  495391 fix.go:54] fixHost starting: 
	I1018 10:35:40.075048  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:40.093765  495391 fix.go:112] recreateIfNeeded on newest-cni-577403: state=Stopped err=<nil>
	W1018 10:35:40.093803  495391 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 10:35:38.519228  487845 node_ready.go:57] node "no-preload-027087" has "Ready":"False" status (will retry)
	W1018 10:35:41.013770  487845 node_ready.go:57] node "no-preload-027087" has "Ready":"False" status (will retry)
	I1018 10:35:40.097109  495391 out.go:252] * Restarting existing docker container for "newest-cni-577403" ...
	I1018 10:35:40.097247  495391 cli_runner.go:164] Run: docker start newest-cni-577403
	I1018 10:35:40.367033  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:40.393168  495391 kic.go:430] container "newest-cni-577403" state is running.
	I1018 10:35:40.395377  495391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-577403
	I1018 10:35:40.416009  495391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/config.json ...
	I1018 10:35:40.416240  495391 machine.go:93] provisionDockerMachine start ...
	I1018 10:35:40.416320  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:40.437309  495391 main.go:141] libmachine: Using SSH client type: native
	I1018 10:35:40.437870  495391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1018 10:35:40.437889  495391 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:35:40.438479  495391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45776->127.0.0.1:33459: read: connection reset by peer
	I1018 10:35:43.596981  495391 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-577403
	
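The "connection reset by peer" above is the normal race after docker start: the forwarded SSH port accepts connections before sshd inside the container is ready, and the provisioner simply retries until the command succeeds (about three seconds later here). A self-contained Go sketch of that probe-and-retry shape; the forwarded port 33459 is taken from the log, everything else is illustrative:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP probes addr until a TCP connect succeeds or timeout elapses.
// A successful connect only shows the port is forwarded; the SSH handshake
// itself may still fail once or twice while sshd finishes starting.
func waitForTCP(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond) // back off before the next probe
	}
	return fmt.Errorf("tcp %s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForTCP("127.0.0.1:33459", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
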
	I1018 10:35:43.597011  495391 ubuntu.go:182] provisioning hostname "newest-cni-577403"
	I1018 10:35:43.597080  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:43.615432  495391 main.go:141] libmachine: Using SSH client type: native
	I1018 10:35:43.615750  495391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1018 10:35:43.615768  495391 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-577403 && echo "newest-cni-577403" | sudo tee /etc/hostname
	I1018 10:35:43.783111  495391 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-577403
	
	I1018 10:35:43.783191  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:43.800707  495391 main.go:141] libmachine: Using SSH client type: native
	I1018 10:35:43.801011  495391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1018 10:35:43.801033  495391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-577403' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-577403/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-577403' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:35:43.955118  495391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 10:35:43.955193  495391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:35:43.955246  495391 ubuntu.go:190] setting up certificates
	I1018 10:35:43.955278  495391 provision.go:84] configureAuth start
	I1018 10:35:43.955363  495391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-577403
	I1018 10:35:43.972794  495391 provision.go:143] copyHostCerts
	I1018 10:35:43.972869  495391 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:35:43.972939  495391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:35:43.973068  495391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:35:43.973176  495391 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:35:43.973210  495391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:35:43.973244  495391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:35:43.973382  495391 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:35:43.973391  495391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:35:43.973423  495391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:35:43.973513  495391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.newest-cni-577403 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-577403]
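
The server certificate generated above embeds the SANs listed in the log line so TLS verification succeeds whether the daemon is reached via 127.0.0.1, the container IP, or a hostname. A cut-down Go sketch of issuing such a certificate with crypto/x509; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair named above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-577403"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"localhost", "minikube", "newest-cni-577403"},
	}
	// Self-signed (template == parent) to keep the sketch short.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
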
	I1018 10:35:44.275227  495391 provision.go:177] copyRemoteCerts
	I1018 10:35:44.275294  495391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:35:44.275338  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:44.300095  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:44.405483  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:35:44.424129  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 10:35:44.442030  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:35:44.459902  495391 provision.go:87] duration metric: took 504.571348ms to configureAuth
	I1018 10:35:44.459934  495391 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:35:44.460170  495391 config.go:182] Loaded profile config "newest-cni-577403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:35:44.460313  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:44.477530  495391 main.go:141] libmachine: Using SSH client type: native
	I1018 10:35:44.477853  495391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1018 10:35:44.477879  495391 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:35:44.768677  495391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:35:44.768701  495391 machine.go:96] duration metric: took 4.352443773s to provisionDockerMachine
	I1018 10:35:44.768711  495391 start.go:293] postStartSetup for "newest-cni-577403" (driver="docker")
	I1018 10:35:44.768722  495391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:35:44.768802  495391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:35:44.768842  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:44.788260  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	W1018 10:35:43.516141  487845 node_ready.go:57] node "no-preload-027087" has "Ready":"False" status (will retry)
	I1018 10:35:45.517013  487845 node_ready.go:49] node "no-preload-027087" is "Ready"
	I1018 10:35:45.517040  487845 node_ready.go:38] duration metric: took 15.507208383s for node "no-preload-027087" to be "Ready" ...
	I1018 10:35:45.517053  487845 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:35:45.517113  487845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:35:45.536180  487845 api_server.go:72] duration metric: took 17.469094556s to wait for apiserver process to appear ...
	I1018 10:35:45.536208  487845 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:35:45.536229  487845 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 10:35:45.550548  487845 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 10:35:45.551795  487845 api_server.go:141] control plane version: v1.34.1
	I1018 10:35:45.551819  487845 api_server.go:131] duration metric: took 15.604409ms to wait for apiserver health ...
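
The healthz wait above is a plain HTTPS GET against the apiserver, repeated until it returns 200 with body "ok". A minimal Go sketch of a single probe; the endpoint is the one in the log, and TLS verification is skipped only to keep the sketch self-contained (the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify is for this sketch only; do not use it outside tests.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
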
	I1018 10:35:45.551830  487845 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:35:45.555402  487845 system_pods.go:59] 8 kube-system pods found
	I1018 10:35:45.555437  487845 system_pods.go:61] "coredns-66bc5c9577-wt4wd" [ff570964-d787-4c47-a498-4ac05ed09b0a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:35:45.555444  487845 system_pods.go:61] "etcd-no-preload-027087" [df0b81be-5ccd-481d-88e8-0a351635eab5] Running
	I1018 10:35:45.555450  487845 system_pods.go:61] "kindnet-t9q5g" [4286ff28-6eca-4678-9d54-3a2dbe9bf8d1] Running
	I1018 10:35:45.555454  487845 system_pods.go:61] "kube-apiserver-no-preload-027087" [949b1bb0-6625-40d4-b2a4-75e49fd87133] Running
	I1018 10:35:45.555459  487845 system_pods.go:61] "kube-controller-manager-no-preload-027087" [1395022f-1ef0-43f8-b175-f5c5fdfdb777] Running
	I1018 10:35:45.555464  487845 system_pods.go:61] "kube-proxy-s87k4" [2e127631-8e09-43da-8d5a-7238894eedac] Running
	I1018 10:35:45.555473  487845 system_pods.go:61] "kube-scheduler-no-preload-027087" [dd112b07-cc98-4f21-8211-3ac896ec0be9] Running
	I1018 10:35:45.555480  487845 system_pods.go:61] "storage-provisioner" [b6343f75-ba5e-48f6-8eec-5343cabc28a4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:35:45.555489  487845 system_pods.go:74] duration metric: took 3.652918ms to wait for pod list to return data ...
	I1018 10:35:45.555502  487845 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:35:45.559334  487845 default_sa.go:45] found service account: "default"
	I1018 10:35:45.559355  487845 default_sa.go:55] duration metric: took 3.846538ms for default service account to be created ...
	I1018 10:35:45.559365  487845 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 10:35:45.562454  487845 system_pods.go:86] 8 kube-system pods found
	I1018 10:35:45.562490  487845 system_pods.go:89] "coredns-66bc5c9577-wt4wd" [ff570964-d787-4c47-a498-4ac05ed09b0a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:35:45.562497  487845 system_pods.go:89] "etcd-no-preload-027087" [df0b81be-5ccd-481d-88e8-0a351635eab5] Running
	I1018 10:35:45.562504  487845 system_pods.go:89] "kindnet-t9q5g" [4286ff28-6eca-4678-9d54-3a2dbe9bf8d1] Running
	I1018 10:35:45.562508  487845 system_pods.go:89] "kube-apiserver-no-preload-027087" [949b1bb0-6625-40d4-b2a4-75e49fd87133] Running
	I1018 10:35:45.562513  487845 system_pods.go:89] "kube-controller-manager-no-preload-027087" [1395022f-1ef0-43f8-b175-f5c5fdfdb777] Running
	I1018 10:35:45.562517  487845 system_pods.go:89] "kube-proxy-s87k4" [2e127631-8e09-43da-8d5a-7238894eedac] Running
	I1018 10:35:45.562522  487845 system_pods.go:89] "kube-scheduler-no-preload-027087" [dd112b07-cc98-4f21-8211-3ac896ec0be9] Running
	I1018 10:35:45.562529  487845 system_pods.go:89] "storage-provisioner" [b6343f75-ba5e-48f6-8eec-5343cabc28a4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:35:45.562547  487845 retry.go:31] will retry after 246.903091ms: missing components: kube-dns
	I1018 10:35:45.834464  487845 system_pods.go:86] 8 kube-system pods found
	I1018 10:35:45.834504  487845 system_pods.go:89] "coredns-66bc5c9577-wt4wd" [ff570964-d787-4c47-a498-4ac05ed09b0a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:35:45.834511  487845 system_pods.go:89] "etcd-no-preload-027087" [df0b81be-5ccd-481d-88e8-0a351635eab5] Running
	I1018 10:35:45.834517  487845 system_pods.go:89] "kindnet-t9q5g" [4286ff28-6eca-4678-9d54-3a2dbe9bf8d1] Running
	I1018 10:35:45.834522  487845 system_pods.go:89] "kube-apiserver-no-preload-027087" [949b1bb0-6625-40d4-b2a4-75e49fd87133] Running
	I1018 10:35:45.834526  487845 system_pods.go:89] "kube-controller-manager-no-preload-027087" [1395022f-1ef0-43f8-b175-f5c5fdfdb777] Running
	I1018 10:35:45.834530  487845 system_pods.go:89] "kube-proxy-s87k4" [2e127631-8e09-43da-8d5a-7238894eedac] Running
	I1018 10:35:45.834533  487845 system_pods.go:89] "kube-scheduler-no-preload-027087" [dd112b07-cc98-4f21-8211-3ac896ec0be9] Running
	I1018 10:35:45.834542  487845 system_pods.go:89] "storage-provisioner" [b6343f75-ba5e-48f6-8eec-5343cabc28a4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:35:45.834557  487845 retry.go:31] will retry after 243.620287ms: missing components: kube-dns
	I1018 10:35:46.084007  487845 system_pods.go:86] 8 kube-system pods found
	I1018 10:35:46.084098  487845 system_pods.go:89] "coredns-66bc5c9577-wt4wd" [ff570964-d787-4c47-a498-4ac05ed09b0a] Running
	I1018 10:35:46.084121  487845 system_pods.go:89] "etcd-no-preload-027087" [df0b81be-5ccd-481d-88e8-0a351635eab5] Running
	I1018 10:35:46.084143  487845 system_pods.go:89] "kindnet-t9q5g" [4286ff28-6eca-4678-9d54-3a2dbe9bf8d1] Running
	I1018 10:35:46.084164  487845 system_pods.go:89] "kube-apiserver-no-preload-027087" [949b1bb0-6625-40d4-b2a4-75e49fd87133] Running
	I1018 10:35:46.084185  487845 system_pods.go:89] "kube-controller-manager-no-preload-027087" [1395022f-1ef0-43f8-b175-f5c5fdfdb777] Running
	I1018 10:35:46.084204  487845 system_pods.go:89] "kube-proxy-s87k4" [2e127631-8e09-43da-8d5a-7238894eedac] Running
	I1018 10:35:46.084224  487845 system_pods.go:89] "kube-scheduler-no-preload-027087" [dd112b07-cc98-4f21-8211-3ac896ec0be9] Running
	I1018 10:35:46.084244  487845 system_pods.go:89] "storage-provisioner" [b6343f75-ba5e-48f6-8eec-5343cabc28a4] Running
	I1018 10:35:46.084268  487845 system_pods.go:126] duration metric: took 524.896847ms to wait for k8s-apps to be running ...
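
The two retry.go lines above show the generic shape of this wait: list the kube-system pods, report what is still missing (kube-dns here), sleep a short interval, and try again until everything is Running or a deadline passes. A small Go sketch of that loop with a stubbed check; the interval and messages are modeled on the log, not taken from minikube's retry package:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check until it returns nil or timeout elapses,
// sleeping interval between attempts.
func waitFor(timeout, interval time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; last error: %w", err)
		}
		fmt.Printf("will retry after %s: %v\n", interval, err)
		time.Sleep(interval)
	}
}

func main() {
	attempts := 0
	err := waitFor(2*time.Second, 250*time.Millisecond, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil // all pods Running
	})
	fmt.Println("done:", err)
}
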
	I1018 10:35:46.084289  487845 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 10:35:46.084365  487845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:35:46.100826  487845 system_svc.go:56] duration metric: took 16.526053ms WaitForService to wait for kubelet
	I1018 10:35:46.100852  487845 kubeadm.go:586] duration metric: took 18.033771789s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:35:46.100870  487845 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:35:46.104372  487845 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:35:46.104401  487845 node_conditions.go:123] node cpu capacity is 2
	I1018 10:35:46.104413  487845 node_conditions.go:105] duration metric: took 3.538407ms to run NodePressure ...
	I1018 10:35:46.104426  487845 start.go:241] waiting for startup goroutines ...
	I1018 10:35:46.104434  487845 start.go:246] waiting for cluster config update ...
	I1018 10:35:46.104446  487845 start.go:255] writing updated cluster config ...
	I1018 10:35:46.104763  487845 ssh_runner.go:195] Run: rm -f paused
	I1018 10:35:46.108852  487845 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:35:46.112794  487845 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wt4wd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.118588  487845 pod_ready.go:94] pod "coredns-66bc5c9577-wt4wd" is "Ready"
	I1018 10:35:46.118722  487845 pod_ready.go:86] duration metric: took 5.859296ms for pod "coredns-66bc5c9577-wt4wd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.121348  487845 pod_ready.go:83] waiting for pod "etcd-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.130565  487845 pod_ready.go:94] pod "etcd-no-preload-027087" is "Ready"
	I1018 10:35:46.130641  487845 pod_ready.go:86] duration metric: took 9.222341ms for pod "etcd-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.134263  487845 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.144400  487845 pod_ready.go:94] pod "kube-apiserver-no-preload-027087" is "Ready"
	I1018 10:35:46.144471  487845 pod_ready.go:86] duration metric: took 10.141532ms for pod "kube-apiserver-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.147094  487845 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.519901  487845 pod_ready.go:94] pod "kube-controller-manager-no-preload-027087" is "Ready"
	I1018 10:35:46.519934  487845 pod_ready.go:86] duration metric: took 372.764875ms for pod "kube-controller-manager-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:44.897372  495391 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:35:44.901120  495391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:35:44.901148  495391 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:35:44.901159  495391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:35:44.901243  495391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:35:44.901323  495391 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:35:44.901441  495391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:35:44.909035  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:35:44.930516  495391 start.go:296] duration metric: took 161.788667ms for postStartSetup
	I1018 10:35:44.930615  495391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:35:44.930669  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:44.948531  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:45.062980  495391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:35:45.075232  495391 fix.go:56] duration metric: took 5.000434531s for fixHost
	I1018 10:35:45.075257  495391 start.go:83] releasing machines lock for "newest-cni-577403", held for 5.000496094s
	I1018 10:35:45.075345  495391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-577403
	I1018 10:35:45.122222  495391 ssh_runner.go:195] Run: cat /version.json
	I1018 10:35:45.122300  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:45.133589  495391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:35:45.133670  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:45.178589  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:45.193667  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:45.377497  495391 ssh_runner.go:195] Run: systemctl --version
	I1018 10:35:45.492016  495391 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:35:45.558708  495391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:35:45.565479  495391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:35:45.565553  495391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:35:45.579827  495391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 10:35:45.579851  495391 start.go:495] detecting cgroup driver to use...
	I1018 10:35:45.579883  495391 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:35:45.579941  495391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:35:45.599962  495391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:35:45.615542  495391 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:35:45.615607  495391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:35:45.631974  495391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:35:45.645768  495391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:35:45.850945  495391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:35:46.040544  495391 docker.go:234] disabling docker service ...
	I1018 10:35:46.040672  495391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:35:46.058366  495391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:35:46.072599  495391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:35:46.219042  495391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:35:46.339609  495391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:35:46.352898  495391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:35:46.367774  495391 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:35:46.367924  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.376873  495391 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:35:46.376976  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.385883  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.394716  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.410171  495391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:35:46.418435  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.427933  495391 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.436348  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.445365  495391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:35:46.453322  495391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:35:46.460779  495391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:35:46.597365  495391 ssh_runner.go:195] Run: sudo systemctl restart crio
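
Each of the sed invocations above is an idempotent line-level rewrite of the CRI-O drop-in config, and the single daemon-reload plus restart at the end lets all the edits land together. A Go sketch of the first rewrite (the pause image), with the path and value from the log; the other edits follow the same pattern with different regexes, and writing under /etc of course requires root:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setPauseImage is the Go analogue of:
//   sudo sed -i 's|^.*pause_image = .*$|pause_image = "..."|' <path>
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1")
	if err != nil {
		fmt.Println(err)
	}
}
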
	I1018 10:35:46.749245  495391 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:35:46.749368  495391 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:35:46.753646  495391 start.go:563] Will wait 60s for crictl version
	I1018 10:35:46.753713  495391 ssh_runner.go:195] Run: which crictl
	I1018 10:35:46.757613  495391 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:35:46.783482  495391 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:35:46.783572  495391 ssh_runner.go:195] Run: crio --version
	I1018 10:35:46.814334  495391 ssh_runner.go:195] Run: crio --version
	I1018 10:35:46.847943  495391 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:35:46.850766  495391 cli_runner.go:164] Run: docker network inspect newest-cni-577403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:35:46.867061  495391 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 10:35:46.870908  495391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
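
The bash one-liner above updates /etc/hosts safely: filter out any stale host.minikube.internal line, append the fresh mapping, write the result to a temp file, then copy it into place so readers never observe a half-written file. The same pattern in a short Go sketch (IP and hostname from the log; the temp-file location is illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<name>" and
// appends a fresh "ip\tname" mapping, replacing the file atomically
// via temp file + rename.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := filepath.Join(filepath.Dir(hostsPath), ".hosts.tmp")
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
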
	I1018 10:35:46.885297  495391 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 10:35:46.714713  487845 pod_ready.go:83] waiting for pod "kube-proxy-s87k4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:47.112811  487845 pod_ready.go:94] pod "kube-proxy-s87k4" is "Ready"
	I1018 10:35:47.112845  487845 pod_ready.go:86] duration metric: took 398.049543ms for pod "kube-proxy-s87k4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:47.313325  487845 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:47.713727  487845 pod_ready.go:94] pod "kube-scheduler-no-preload-027087" is "Ready"
	I1018 10:35:47.713752  487845 pod_ready.go:86] duration metric: took 400.404874ms for pod "kube-scheduler-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:47.713763  487845 pod_ready.go:40] duration metric: took 1.604831389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:35:47.799613  487845 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:35:47.806148  487845 out.go:179] * Done! kubectl is now configured to use "no-preload-027087" cluster and "default" namespace by default
	I1018 10:35:46.888166  495391 kubeadm.go:883] updating cluster {Name:newest-cni-577403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:35:46.888290  495391 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:35:46.888365  495391 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:35:46.930436  495391 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:35:46.930456  495391 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:35:46.930517  495391 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:35:46.959600  495391 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:35:46.959678  495391 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:35:46.959700  495391 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 10:35:46.959834  495391 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-577403 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 10:35:46.959960  495391 ssh_runner.go:195] Run: crio config
	I1018 10:35:47.017741  495391 cni.go:84] Creating CNI manager for ""
	I1018 10:35:47.017759  495391 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:35:47.017777  495391 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 10:35:47.017803  495391 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-577403 NodeName:newest-cni-577403 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:35:47.017948  495391 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-577403"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
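
The kubeadm manifest above is a single file carrying four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by bare "---" lines. As a minimal stdlib-only Go sketch, here is one way to enumerate the document kinds in such a file; the path is the one the log scps the manifest to below, everything else is illustrative:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Path taken from the log below; adjust for local experiments.
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // kubeadm separates the documents with a bare "---" line.
        for _, doc := range strings.Split(string(data), "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
                    fmt.Println(strings.TrimSpace(line))
                }
            }
        }
    }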
	
	I1018 10:35:47.018023  495391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:35:47.027031  495391 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:35:47.027117  495391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:35:47.035837  495391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 10:35:47.049534  495391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:35:47.063007  495391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1018 10:35:47.076433  495391 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:35:47.080306  495391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
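
The bash one-liner above makes the /etc/hosts update idempotent: it filters out any stale control-plane.minikube.internal entry, appends the current IP, and replaces the file through a temp copy. A hedged Go equivalent of that dance (it needs the same root privileges the log obtains via sudo; IP and hostname are taken from the log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func updateHosts(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any existing entry for the hostname, whatever IP it had.
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        // Write a temp file first, then swap it in, mirroring the log's
        // "> /tmp/h.$$; sudo cp" sequence.
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, path)
    }

    func main() {
        if err := updateHosts("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }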
	I1018 10:35:47.090375  495391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:35:47.204325  495391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:35:47.225369  495391 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403 for IP: 192.168.85.2
	I1018 10:35:47.225436  495391 certs.go:195] generating shared ca certs ...
	I1018 10:35:47.225467  495391 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:47.225631  495391 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:35:47.225720  495391 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:35:47.225752  495391 certs.go:257] generating profile certs ...
	I1018 10:35:47.225860  495391 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/client.key
	I1018 10:35:47.225960  495391 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.key.da20550e
	I1018 10:35:47.226032  495391 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/proxy-client.key
	I1018 10:35:47.226191  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:35:47.226258  495391 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:35:47.226290  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:35:47.226337  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:35:47.226389  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:35:47.226432  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:35:47.226504  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:35:47.227115  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:35:47.249725  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:35:47.270527  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:35:47.292026  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:35:47.315038  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 10:35:47.335134  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:35:47.354499  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:35:47.381269  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 10:35:47.402919  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:35:47.434336  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:35:47.453622  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:35:47.473935  495391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:35:47.487419  495391 ssh_runner.go:195] Run: openssl version
	I1018 10:35:47.500286  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:35:47.517526  495391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:35:47.522408  495391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:35:47.522527  495391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:35:47.568019  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:35:47.577990  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:35:47.586709  495391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:35:47.590959  495391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:35:47.591035  495391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:35:47.632731  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:35:47.641420  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:35:47.650459  495391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:35:47.654271  495391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:35:47.654339  495391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:35:47.696977  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
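
The openssl x509 -hash / ln -fs pairs above exist because OpenSSL resolves trust anchors in /etc/ssl/certs by subject-hash filename (here 51391683.0, 3ec20f2e.0 and b5213941.0), so each installed CA gets a hash-named symlink next to its PEM copy. A sketch of the same step in Go, shelling out to openssl exactly as the log does; the cert path is one of the files installed above:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkByHash(certPath, certsDir string) error {
        // Same command the log runs: prints the OpenSSL subject hash.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace a stale link, mirroring ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }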
	I1018 10:35:47.705648  495391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:35:47.709701  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 10:35:47.758867  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 10:35:47.806936  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 10:35:47.933964  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 10:35:48.116339  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 10:35:48.246261  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
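
Each "-checkend 86400" run above asks openssl whether the certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. The same test expressed with Go's crypto/x509, as a sketch (the path is one of the certs checked above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, matching `openssl x509 -checkend` semantics.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }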
	I1018 10:35:48.327624  495391 kubeadm.go:400] StartCluster: {Name:newest-cni-577403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:35:48.327723  495391 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:35:48.327796  495391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:35:48.380000  495391 cri.go:89] found id: "4931d62ebebc151d36b33ceac56370520ce022b159f398ba2c6d4d5335fe5cd5"
	I1018 10:35:48.380023  495391 cri.go:89] found id: "8598a86fdd0b5578be0124e533f2578cdbca59b60d2e2c51ec223a9bceea0ced"
	I1018 10:35:48.380029  495391 cri.go:89] found id: "b81cb4d2f278c266341a3cd9b07f6427e26118aa0c261292dea5cf46666371e8"
	I1018 10:35:48.380033  495391 cri.go:89] found id: "f87e3ee83d2a07038569e0e133062e319fd5545af2a5f970168374c1227e8428"
	I1018 10:35:48.380036  495391 cri.go:89] found id: ""
	I1018 10:35:48.380090  495391 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 10:35:48.396198  495391 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:35:48Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:35:48.396286  495391 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:35:48.426352  495391 kubeadm.go:416] found existing configuration files, will attempt cluster restart
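
The restart decision above hinges on a single ls: if the kubelet flag file, the kubelet config, and the etcd data directory all exist, minikube attempts a cluster restart rather than a fresh kubeadm init. A hedged sketch of that probe (paths copied from the log):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        paths := []string{
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd",
        }
        for _, p := range paths {
            if _, err := os.Stat(p); err != nil {
                fmt.Println("no prior configuration; full kubeadm init required")
                return
            }
        }
        fmt.Println("found existing configuration files, will attempt cluster restart")
    }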
	I1018 10:35:48.426372  495391 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 10:35:48.426444  495391 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 10:35:48.442293  495391 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 10:35:48.442875  495391 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-577403" does not appear in /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:35:48.443165  495391 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-293333/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-577403" cluster setting kubeconfig missing "newest-cni-577403" context setting]
	I1018 10:35:48.443641  495391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:48.446016  495391 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 10:35:48.467094  495391 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 10:35:48.467128  495391 kubeadm.go:601] duration metric: took 40.749819ms to restartPrimaryControlPlane
	I1018 10:35:48.467138  495391 kubeadm.go:402] duration metric: took 139.524326ms to StartCluster
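
The "running cluster does not require reconfiguration" verdict above comes from comparing the freshly generated kubeadm.yaml.new against the config already on the node; the log uses diff -u, and for the yes/no decision a byte comparison is equivalent. A sketch under that assumption:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // needsReconfigure reports whether the proposed config differs from the
    // one currently installed; a missing current config also forces a redo.
    func needsReconfigure(current, proposed string) (bool, error) {
        a, err := os.ReadFile(current)
        if err != nil {
            return true, err
        }
        b, err := os.ReadFile(proposed)
        if err != nil {
            return true, err
        }
        return !bytes.Equal(a, b), nil
    }

    func main() {
        changed, _ := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println("reconfigure needed:", changed)
    }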
	I1018 10:35:48.467152  495391 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:48.467216  495391 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:35:48.468223  495391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:48.468465  495391 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:35:48.468802  495391 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:35:48.468876  495391 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-577403"
	I1018 10:35:48.468892  495391 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-577403"
	W1018 10:35:48.468903  495391 addons.go:247] addon storage-provisioner should already be in state true
	I1018 10:35:48.468923  495391 host.go:66] Checking if "newest-cni-577403" exists ...
	I1018 10:35:48.469552  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:48.469959  495391 config.go:182] Loaded profile config "newest-cni-577403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:35:48.470017  495391 addons.go:69] Setting default-storageclass=true in profile "newest-cni-577403"
	I1018 10:35:48.470036  495391 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-577403"
	I1018 10:35:48.470298  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:48.472740  495391 addons.go:69] Setting dashboard=true in profile "newest-cni-577403"
	I1018 10:35:48.472770  495391 addons.go:238] Setting addon dashboard=true in "newest-cni-577403"
	W1018 10:35:48.472778  495391 addons.go:247] addon dashboard should already be in state true
	I1018 10:35:48.472812  495391 host.go:66] Checking if "newest-cni-577403" exists ...
	I1018 10:35:48.473342  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:48.473871  495391 out.go:179] * Verifying Kubernetes components...
	I1018 10:35:48.480751  495391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:35:48.536239  495391 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 10:35:48.536368  495391 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:35:48.539243  495391 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:35:48.539278  495391 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:35:48.539248  495391 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 10:35:48.539348  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:48.541507  495391 addons.go:238] Setting addon default-storageclass=true in "newest-cni-577403"
	W1018 10:35:48.541533  495391 addons.go:247] addon default-storageclass should already be in state true
	I1018 10:35:48.541557  495391 host.go:66] Checking if "newest-cni-577403" exists ...
	I1018 10:35:48.541970  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:48.542272  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 10:35:48.542291  495391 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 10:35:48.542346  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:48.586169  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:48.606874  495391 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:35:48.606900  495391 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:35:48.606963  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:48.616866  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:48.642774  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:48.785117  495391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:35:48.820842  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 10:35:48.820864  495391 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 10:35:48.845950  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 10:35:48.845972  495391 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 10:35:48.869686  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 10:35:48.869707  495391 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 10:35:48.899843  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 10:35:48.899925  495391 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 10:35:48.922442  495391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:35:48.951187  495391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:35:48.992259  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 10:35:48.992336  495391 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 10:35:49.090634  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 10:35:49.090658  495391 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 10:35:49.154528  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 10:35:49.154552  495391 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 10:35:49.196258  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 10:35:49.196290  495391 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 10:35:49.219095  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 10:35:49.219135  495391 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 10:35:49.245405  495391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 10:35:54.520126  495391 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.734915278s)
	I1018 10:35:54.520198  495391 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.597676781s)
	I1018 10:35:54.520518  495391 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.569265757s)
	I1018 10:35:54.520558  495391 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:35:54.520617  495391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:35:54.520760  495391 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.2752767s)
	I1018 10:35:54.523991  495391 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-577403 addons enable metrics-server
	
	I1018 10:35:54.544161  495391 api_server.go:72] duration metric: took 6.07565302s to wait for apiserver process to appear ...
	I1018 10:35:54.544183  495391 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:35:54.544201  495391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 10:35:54.557901  495391 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 10:35:54.557975  495391 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 10:35:54.567339  495391 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 10:35:54.570380  495391 addons.go:514] duration metric: took 6.101573061s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 10:35:55.045029  495391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 10:35:55.053858  495391 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 10:35:55.055336  495391 api_server.go:141] control plane version: v1.34.1
	I1018 10:35:55.055366  495391 api_server.go:131] duration metric: took 511.176614ms to wait for apiserver health ...
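
The health wait above tolerates the transient 500 emitted while poststarthook/rbac/bootstrap-roles is still completing, then accepts the 200. A simplified polling loop in Go; minikube's real probe authenticates with client certificates, so skipping TLS verification here is purely a shortcut for the sketch:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        for deadline := time.Now().Add(timeout); time.Now().Before(deadline); {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned 200: ok
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        if err := waitHealthy("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }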
	I1018 10:35:55.055377  495391 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:35:55.060646  495391 system_pods.go:59] 8 kube-system pods found
	I1018 10:35:55.060696  495391 system_pods.go:61] "coredns-66bc5c9577-g5hjd" [d8506151-9057-4d64-9951-94bfc8e48157] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 10:35:55.060706  495391 system_pods.go:61] "etcd-newest-cni-577403" [9061973a-4cc4-4701-ac68-b463a5c36efe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:35:55.060711  495391 system_pods.go:61] "kindnet-dc6mn" [59b45574-ece2-4376-aacf-8e87cb8f03e7] Running
	I1018 10:35:55.060719  495391 system_pods.go:61] "kube-apiserver-newest-cni-577403" [bfab2b0b-ff85-4eb8-8e64-157577d51881] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:35:55.060730  495391 system_pods.go:61] "kube-controller-manager-newest-cni-577403" [0ffcd3ef-9adb-437c-9c04-32638238a83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:35:55.060738  495391 system_pods.go:61] "kube-proxy-4twn2" [060f019f-35b3-47a0-af70-f480829d1715] Running
	I1018 10:35:55.060744  495391 system_pods.go:61] "kube-scheduler-newest-cni-577403" [6c0fc2df-7ebe-4634-828f-7febca31dffc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:35:55.060767  495391 system_pods.go:61] "storage-provisioner" [2f727e8b-afd6-4e3e-96f3-a9d649d239ff] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 10:35:55.060774  495391 system_pods.go:74] duration metric: took 5.391812ms to wait for pod list to return data ...
	I1018 10:35:55.060789  495391 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:35:55.063384  495391 default_sa.go:45] found service account: "default"
	I1018 10:35:55.063414  495391 default_sa.go:55] duration metric: took 2.617329ms for default service account to be created ...
	I1018 10:35:55.063436  495391 kubeadm.go:586] duration metric: took 6.594929328s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 10:35:55.063456  495391 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:35:55.068366  495391 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:35:55.068419  495391 node_conditions.go:123] node cpu capacity is 2
	I1018 10:35:55.068499  495391 node_conditions.go:105] duration metric: took 4.971713ms to run NodePressure ...
	I1018 10:35:55.068518  495391 start.go:241] waiting for startup goroutines ...
	I1018 10:35:55.068526  495391 start.go:246] waiting for cluster config update ...
	I1018 10:35:55.068541  495391 start.go:255] writing updated cluster config ...
	I1018 10:35:55.068886  495391 ssh_runner.go:195] Run: rm -f paused
	I1018 10:35:55.161466  495391 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:35:55.164855  495391 out.go:179] * Done! kubectl is now configured to use "newest-cni-577403" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.722269945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.72857641Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-4twn2/POD" id=89fbde2c-f237-4495-b75c-f489aebe006c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.728658421Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.738181436Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=937469e2-fa77-4b45-8528-97c197e2783e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.738660514Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=89fbde2c-f237-4495-b75c-f489aebe006c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.748957495Z" level=info msg="Ran pod sandbox cd5598ce8b18d7fc270737550824f81afecf7a31744587cf2f90c29886850739 with infra container: kube-system/kube-proxy-4twn2/POD" id=89fbde2c-f237-4495-b75c-f489aebe006c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.753073796Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d0583c7f-c513-4f0a-bdec-996f9535630f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.754483939Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=bd88f553-82da-4fbf-86b6-60c61384297d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.756177211Z" level=info msg="Creating container: kube-system/kube-proxy-4twn2/kube-proxy" id=8d7d105e-7bb5-42af-a59e-03527a87b07f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.756636835Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.771221795Z" level=info msg="Ran pod sandbox 8406f7ff2c8baf6de3d6f07b4655d6c5fff6fbac43b6bbd56ca319d362e9c840 with infra container: kube-system/kindnet-dc6mn/POD" id=937469e2-fa77-4b45-8528-97c197e2783e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.77847681Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=833b4518-46ec-4907-ab1e-8d3ed99db56d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.780127652Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5c712c8b-dd50-4022-9676-f260163e8038 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.782036911Z" level=info msg="Creating container: kube-system/kindnet-dc6mn/kindnet-cni" id=218bdadc-309f-4a15-a3cc-442fb21a9591 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.782778976Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.802625744Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.804047792Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.807146259Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.812610072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.844227948Z" level=info msg="Created container 143638a5b7bf803e029dbffcae584d824d7c7a881004b359c01169857f62bcd3: kube-system/kindnet-dc6mn/kindnet-cni" id=218bdadc-309f-4a15-a3cc-442fb21a9591 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.845171017Z" level=info msg="Starting container: 143638a5b7bf803e029dbffcae584d824d7c7a881004b359c01169857f62bcd3" id=87806a7e-5dc4-427f-a573-dd409fe0d1eb name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.850961669Z" level=info msg="Started container" PID=1056 containerID=143638a5b7bf803e029dbffcae584d824d7c7a881004b359c01169857f62bcd3 description=kube-system/kindnet-dc6mn/kindnet-cni id=87806a7e-5dc4-427f-a573-dd409fe0d1eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=8406f7ff2c8baf6de3d6f07b4655d6c5fff6fbac43b6bbd56ca319d362e9c840
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.853545815Z" level=info msg="Created container 819fe87a1d42b01fc86148fa045944c436638f510eb5f3bd9020c228e244a301: kube-system/kube-proxy-4twn2/kube-proxy" id=8d7d105e-7bb5-42af-a59e-03527a87b07f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.856339713Z" level=info msg="Starting container: 819fe87a1d42b01fc86148fa045944c436638f510eb5f3bd9020c228e244a301" id=e1338dbd-9524-4d7e-b743-2d212a8254ae name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.869702766Z" level=info msg="Started container" PID=1057 containerID=819fe87a1d42b01fc86148fa045944c436638f510eb5f3bd9020c228e244a301 description=kube-system/kube-proxy-4twn2/kube-proxy id=e1338dbd-9524-4d7e-b743-2d212a8254ae name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd5598ce8b18d7fc270737550824f81afecf7a31744587cf2f90c29886850739
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	143638a5b7bf8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   8406f7ff2c8ba       kindnet-dc6mn                               kube-system
	819fe87a1d42b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   cd5598ce8b18d       kube-proxy-4twn2                            kube-system
	4931d62ebebc1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago      Running             kube-apiserver            1                   52453295cd691       kube-apiserver-newest-cni-577403            kube-system
	8598a86fdd0b5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago      Running             etcd                      1                   899be4382bb59       etcd-newest-cni-577403                      kube-system
	b81cb4d2f278c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago      Running             kube-scheduler            1                   3c895e7c8388d       kube-scheduler-newest-cni-577403            kube-system
	f87e3ee83d2a0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago      Running             kube-controller-manager   1                   06713086276c6       kube-controller-manager-newest-cni-577403   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-577403
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-577403
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=newest-cni-577403
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_35_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:35:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-577403
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:35:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:35:53 +0000   Sat, 18 Oct 2025 10:35:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:35:53 +0000   Sat, 18 Oct 2025 10:35:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:35:53 +0000   Sat, 18 Oct 2025 10:35:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 10:35:53 +0000   Sat, 18 Oct 2025 10:35:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-577403
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                094f3965-d36a-4b5c-959d-94a9f33348db
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-577403                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         31s
	  kube-system                 kindnet-dc6mn                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-577403             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-577403    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-4twn2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-577403             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 24s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Warning  CgroupV1                 40s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node newest-cni-577403 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node newest-cni-577403 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     40s (x8 over 40s)  kubelet          Node newest-cni-577403 status is now: NodeHasSufficientPID
	  Normal   Starting                 31s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 31s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  31s                kubelet          Node newest-cni-577403 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    31s                kubelet          Node newest-cni-577403 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     31s                kubelet          Node newest-cni-577403 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           27s                node-controller  Node newest-cni-577403 event: Registered Node newest-cni-577403 in Controller
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-577403 event: Registered Node newest-cni-577403 in Controller
	
	
	==> dmesg <==
	[Oct18 10:16] overlayfs: idmapped layers are currently not supported
	[  +1.944912] overlayfs: idmapped layers are currently not supported
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	[ +55.677340] overlayfs: idmapped layers are currently not supported
	[  +3.870584] overlayfs: idmapped layers are currently not supported
	[Oct18 10:24] overlayfs: idmapped layers are currently not supported
	[ +31.226998] overlayfs: idmapped layers are currently not supported
	[Oct18 10:27] overlayfs: idmapped layers are currently not supported
	[ +41.576921] overlayfs: idmapped layers are currently not supported
	[  +5.117406] overlayfs: idmapped layers are currently not supported
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	[Oct18 10:31] overlayfs: idmapped layers are currently not supported
	[  +3.453230] overlayfs: idmapped layers are currently not supported
	[Oct18 10:33] overlayfs: idmapped layers are currently not supported
	[  +6.524055] overlayfs: idmapped layers are currently not supported
	[Oct18 10:34] overlayfs: idmapped layers are currently not supported
	[Oct18 10:35] overlayfs: idmapped layers are currently not supported
	[ +27.675349] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8598a86fdd0b5578be0124e533f2578cdbca59b60d2e2c51ec223a9bceea0ced] <==
	{"level":"warn","ts":"2025-10-18T10:35:51.856810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:51.876082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:51.899126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:51.913164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:51.930064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:51.945491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:51.961970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:51.978643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:51.994264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.020741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.035229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.047355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.063721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.080945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.096665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.114493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.132463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.146257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.165938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.182427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.201460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.227832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.243255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.263078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.330616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54198","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:00 up  2:18,  0 user,  load average: 4.18, 4.27, 3.38
	Linux newest-cni-577403 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [143638a5b7bf803e029dbffcae584d824d7c7a881004b359c01169857f62bcd3] <==
	I1018 10:35:54.013066       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:35:54.013373       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 10:35:54.013514       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:35:54.013528       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:35:54.013544       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:35:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:35:54.309276       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:35:54.309320       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:35:54.309334       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:35:54.309812       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
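	# The "nri plugin exited" error above is benign when the container runtime has NRI
	# disabled; kindnet continues without it. A minimal check, assuming shell access to
	# the node (socket path taken verbatim from the error):
	test -S /var/run/nri/nri.sock && echo "NRI enabled" || echo "NRI socket absent"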
	
	
	==> kube-apiserver [4931d62ebebc151d36b33ceac56370520ce022b159f398ba2c6d4d5335fe5cd5] <==
	I1018 10:35:53.240148       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 10:35:53.240201       1 policy_source.go:240] refreshing policies
	I1018 10:35:53.249266       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 10:35:53.275520       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 10:35:53.314458       1 aggregator.go:171] initial CRD sync complete...
	I1018 10:35:53.314486       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 10:35:53.314495       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 10:35:53.314501       1 cache.go:39] Caches are synced for autoregister controller
	I1018 10:35:53.381316       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:35:53.383095       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 10:35:53.383395       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 10:35:53.383518       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 10:35:53.388954       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 10:35:53.570426       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 10:35:53.934683       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 10:35:54.042176       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 10:35:54.132220       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 10:35:54.253950       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 10:35:54.292968       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 10:35:54.409615       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.128.13"}
	I1018 10:35:54.433666       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.144.59"}
	I1018 10:35:56.659855       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 10:35:56.702187       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 10:35:56.845725       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 10:35:57.095976       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [f87e3ee83d2a07038569e0e133062e319fd5545af2a5f970168374c1227e8428] <==
	I1018 10:35:56.590744       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 10:35:56.591192       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 10:35:56.591775       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 10:35:56.591923       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 10:35:56.592104       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 10:35:56.601680       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 10:35:56.603362       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 10:35:56.603413       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 10:35:56.606080       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 10:35:56.607310       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 10:35:56.607695       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 10:35:56.607756       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 10:35:56.618879       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:35:56.618906       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 10:35:56.618914       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 10:35:56.634740       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 10:35:56.635895       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 10:35:56.635951       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 10:35:56.636915       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 10:35:56.636961       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 10:35:56.638955       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 10:35:56.639020       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 10:35:56.639074       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 10:35:56.652129       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:35:56.653647       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [819fe87a1d42b01fc86148fa045944c436638f510eb5f3bd9020c228e244a301] <==
	I1018 10:35:54.079392       1 server_linux.go:53] "Using iptables proxy"
	I1018 10:35:54.363016       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 10:35:54.463866       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 10:35:54.463911       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 10:35:54.463988       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 10:35:54.585154       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:35:54.585363       1 server_linux.go:132] "Using iptables Proxier"
	I1018 10:35:54.601782       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 10:35:54.602138       1 server.go:527] "Version info" version="v1.34.1"
	I1018 10:35:54.602183       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:35:54.604446       1 config.go:200] "Starting service config controller"
	I1018 10:35:54.604520       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 10:35:54.604566       1 config.go:106] "Starting endpoint slice config controller"
	I1018 10:35:54.604596       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 10:35:54.604637       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 10:35:54.604665       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 10:35:54.605399       1 config.go:309] "Starting node config controller"
	I1018 10:35:54.605454       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 10:35:54.605484       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 10:35:54.704693       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 10:35:54.704789       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 10:35:54.704699       1 shared_informer.go:356] "Caches are synced" controller="service config"
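	# The nodePortAddresses warning above is advisory: with the field unset, NodePorts
	# listen on every local IP. A hedged sketch of inspecting the field in the
	# kubeadm-managed kube-proxy ConfigMap (that this cluster keeps its config there is
	# an assumption):
	kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses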
	
	
	==> kube-scheduler [b81cb4d2f278c266341a3cd9b07f6427e26118aa0c261292dea5cf46666371e8] <==
	I1018 10:35:50.287126       1 serving.go:386] Generated self-signed cert in-memory
	W1018 10:35:53.125930       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 10:35:53.125974       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 10:35:53.125985       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 10:35:53.125992       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 10:35:53.295966       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 10:35:53.295997       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:35:53.328059       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 10:35:53.332002       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:35:53.335661       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:35:53.332031       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 10:35:53.437068       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
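	# The requestheader_controller warning above names its own remedy. A hedged,
	# filled-in form of that command; the rolebinding name below is an illustrative
	# placeholder, not a value from this run:
	kubectl -n kube-system create rolebinding scheduler-auth-reader \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=kube-system:kube-scheduler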
	
	
	==> kubelet <==
	Oct 18 10:35:51 newest-cni-577403 kubelet[727]: E1018 10:35:51.306406     727 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-577403\" not found" node="newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.062714     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: E1018 10:35:53.330935     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-577403\" already exists" pod="kube-system/etcd-newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.330975     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.341695     727 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.341802     727 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.341846     727 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.349340     727 apiserver.go:52] "Watching apiserver"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.349699     727 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: E1018 10:35:53.435187     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-577403\" already exists" pod="kube-system/kube-apiserver-newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.441459     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.468172     727 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: E1018 10:35:53.476054     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-577403\" already exists" pod="kube-system/kube-controller-manager-newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.476329     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: E1018 10:35:53.494803     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-577403\" already exists" pod="kube-system/kube-scheduler-newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.551200     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/060f019f-35b3-47a0-af70-f480829d1715-xtables-lock\") pod \"kube-proxy-4twn2\" (UID: \"060f019f-35b3-47a0-af70-f480829d1715\") " pod="kube-system/kube-proxy-4twn2"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.551314     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/59b45574-ece2-4376-aacf-8e87cb8f03e7-cni-cfg\") pod \"kindnet-dc6mn\" (UID: \"59b45574-ece2-4376-aacf-8e87cb8f03e7\") " pod="kube-system/kindnet-dc6mn"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.551338     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59b45574-ece2-4376-aacf-8e87cb8f03e7-xtables-lock\") pod \"kindnet-dc6mn\" (UID: \"59b45574-ece2-4376-aacf-8e87cb8f03e7\") " pod="kube-system/kindnet-dc6mn"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.551355     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59b45574-ece2-4376-aacf-8e87cb8f03e7-lib-modules\") pod \"kindnet-dc6mn\" (UID: \"59b45574-ece2-4376-aacf-8e87cb8f03e7\") " pod="kube-system/kindnet-dc6mn"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.551382     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/060f019f-35b3-47a0-af70-f480829d1715-lib-modules\") pod \"kube-proxy-4twn2\" (UID: \"060f019f-35b3-47a0-af70-f480829d1715\") " pod="kube-system/kube-proxy-4twn2"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.587298     727 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: W1018 10:35:53.746049     727 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07/crio-cd5598ce8b18d7fc270737550824f81afecf7a31744587cf2f90c29886850739 WatchSource:0}: Error finding container cd5598ce8b18d7fc270737550824f81afecf7a31744587cf2f90c29886850739: Status 404 returned error can't find the container with id cd5598ce8b18d7fc270737550824f81afecf7a31744587cf2f90c29886850739
	Oct 18 10:35:56 newest-cni-577403 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 10:35:56 newest-cni-577403 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 10:35:56 newest-cni-577403 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
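The kubelet tail above ends with systemd stopping kubelet.service, which is the expected effect of the pause operation under test. A hedged way to confirm the unit state from outside the harness, assuming the profile is still up:

	out/minikube-linux-arm64 -p newest-cni-577403 ssh -- sudo systemctl is-active kubelet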
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-577403 -n newest-cni-577403
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-577403 -n newest-cni-577403: exit status 2 (431.522316ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
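Note that "minikube status" reports component state through its exit code, so "Running" on stdout together with exit status 2 means some other component (here the stopped kubelet) is not Running. A minimal sketch printing several fields plus the exit code, reusing the same template fields the harness queries:

	out/minikube-linux-arm64 status -p newest-cni-577403 --format='{{.Host}}/{{.Kubelet}}/{{.APIServer}}'; echo "exit=$?"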
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-577403 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-g5hjd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m4xxl kubernetes-dashboard-855c9754f9-wkntf
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-577403 describe pod coredns-66bc5c9577-g5hjd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m4xxl kubernetes-dashboard-855c9754f9-wkntf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-577403 describe pod coredns-66bc5c9577-g5hjd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m4xxl kubernetes-dashboard-855c9754f9-wkntf: exit status 1 (84.558444ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-g5hjd" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-m4xxl" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wkntf" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-577403 describe pod coredns-66bc5c9577-g5hjd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m4xxl kubernetes-dashboard-855c9754f9-wkntf: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-577403
helpers_test.go:243: (dbg) docker inspect newest-cni-577403:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07",
	        "Created": "2025-10-18T10:34:57.122600154Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 495518,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:35:40.132542858Z",
	            "FinishedAt": "2025-10-18T10:35:39.238507058Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07/hosts",
	        "LogPath": "/var/lib/docker/containers/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07-json.log",
	        "Name": "/newest-cni-577403",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-577403:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-577403",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07",
	                "LowerDir": "/var/lib/docker/overlay2/f2be6c97e2d5e190ff5e8c5239916812c89591cd76f86c315857e04c4fbe56ba-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2be6c97e2d5e190ff5e8c5239916812c89591cd76f86c315857e04c4fbe56ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2be6c97e2d5e190ff5e8c5239916812c89591cd76f86c315857e04c4fbe56ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2be6c97e2d5e190ff5e8c5239916812c89591cd76f86c315857e04c4fbe56ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-577403",
	                "Source": "/var/lib/docker/volumes/newest-cni-577403/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-577403",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-577403",
	                "name.minikube.sigs.k8s.io": "newest-cni-577403",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "87320a71c9483a008f6dab65565a3aee10e7da8f0fc1e9aa9f5b4ecc201a6c26",
	            "SandboxKey": "/var/run/docker/netns/87320a71c948",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-577403": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:27:c0:76:98:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4944dc29f48a85d80603faba3e0eb9e1b1723b9d4244f496af940a2c5ae27592",
	                    "EndpointID": "f398099f91dd18a43b4f5278aba5bd74f0e8ac8e7a60cf0de4c87bb4e7564545",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-577403",
	                        "8f5c98145c70"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
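When only a few of the fields above matter, docker inspect can project them with a Go template rather than dumping the whole document; a small sketch pulling this container's state and its published API-server port:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-577403
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-577403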
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-577403 -n newest-cni-577403
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-577403 -n newest-cni-577403: exit status 2 (347.543198ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-577403 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-577403 logs -n 25: (1.097287858s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-101897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	│ stop    │ -p embed-certs-101897 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-715182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-101897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ start   │ -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:34 UTC │
	│ image   │ default-k8s-diff-port-715182 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ pause   │ -p default-k8s-diff-port-715182 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-715182                                                                                                                                                                                                               │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p default-k8s-diff-port-715182                                                                                                                                                                                                               │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-922359                                                                                                                                                                                                               │ disable-driver-mounts-922359 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ start   │ -p no-preload-027087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:35 UTC │
	│ image   │ embed-certs-101897 image list --format=json                                                                                                                                                                                                   │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ pause   │ -p embed-certs-101897 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	│ delete  │ -p embed-certs-101897                                                                                                                                                                                                                         │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p embed-certs-101897                                                                                                                                                                                                                         │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ start   │ -p newest-cni-577403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:35 UTC │
	│ addons  │ enable metrics-server -p newest-cni-577403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │                     │
	│ stop    │ -p newest-cni-577403 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ addons  │ enable dashboard -p newest-cni-577403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ start   │ -p newest-cni-577403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ image   │ newest-cni-577403 image list --format=json                                                                                                                                                                                                    │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ pause   │ -p newest-cni-577403 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-027087 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │                     │
	│ stop    │ -p no-preload-027087 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:35:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 10:35:39.801164  495391 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:35:39.801334  495391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:35:39.801345  495391 out.go:374] Setting ErrFile to fd 2...
	I1018 10:35:39.801370  495391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:35:39.801669  495391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:35:39.802108  495391 out.go:368] Setting JSON to false
	I1018 10:35:39.803097  495391 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8290,"bootTime":1760775450,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:35:39.803164  495391 start.go:141] virtualization:  
	I1018 10:35:39.806402  495391 out.go:179] * [newest-cni-577403] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:35:39.810328  495391 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:35:39.810427  495391 notify.go:220] Checking for updates...
	I1018 10:35:39.816271  495391 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:35:39.819256  495391 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:35:39.822114  495391 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:35:39.825029  495391 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:35:39.828055  495391 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:35:39.831505  495391 config.go:182] Loaded profile config "newest-cni-577403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:35:39.832111  495391 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:35:39.863036  495391 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:35:39.863166  495391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:35:39.927173  495391 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 10:35:39.917731447 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:35:39.927286  495391 docker.go:318] overlay module found
	I1018 10:35:39.930366  495391 out.go:179] * Using the docker driver based on existing profile
	I1018 10:35:39.933123  495391 start.go:305] selected driver: docker
	I1018 10:35:39.933144  495391 start.go:925] validating driver "docker" against &{Name:newest-cni-577403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:35:39.933390  495391 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:35:39.934106  495391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:35:40.005381  495391 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 10:35:39.995619077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:35:40.005732  495391 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 10:35:40.005761  495391 cni.go:84] Creating CNI manager for ""
	I1018 10:35:40.005822  495391 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:35:40.005906  495391 start.go:349] cluster config:
	{Name:newest-cni-577403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:35:40.026453  495391 out.go:179] * Starting "newest-cni-577403" primary control-plane node in "newest-cni-577403" cluster
	I1018 10:35:40.033077  495391 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:35:40.041033  495391 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:35:40.053031  495391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:35:40.053164  495391 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:35:40.053229  495391 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 10:35:40.053243  495391 cache.go:58] Caching tarball of preloaded images
	I1018 10:35:40.053329  495391 preload.go:233] Found /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 10:35:40.053342  495391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 10:35:40.053461  495391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/config.json ...
	I1018 10:35:40.074618  495391 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:35:40.074639  495391 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 10:35:40.074659  495391 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:35:40.074684  495391 start.go:360] acquireMachinesLock for newest-cni-577403: {Name:mk1e4df99ad9f1535f8fd365f2c9b2df285e2ff8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:35:40.074753  495391 start.go:364] duration metric: took 49.289µs to acquireMachinesLock for "newest-cni-577403"
	I1018 10:35:40.074783  495391 start.go:96] Skipping create...Using existing machine configuration
	I1018 10:35:40.074789  495391 fix.go:54] fixHost starting: 
	I1018 10:35:40.075048  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:40.093765  495391 fix.go:112] recreateIfNeeded on newest-cni-577403: state=Stopped err=<nil>
	W1018 10:35:40.093803  495391 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 10:35:38.519228  487845 node_ready.go:57] node "no-preload-027087" has "Ready":"False" status (will retry)
	W1018 10:35:41.013770  487845 node_ready.go:57] node "no-preload-027087" has "Ready":"False" status (will retry)
	I1018 10:35:40.097109  495391 out.go:252] * Restarting existing docker container for "newest-cni-577403" ...
	I1018 10:35:40.097247  495391 cli_runner.go:164] Run: docker start newest-cni-577403
	I1018 10:35:40.367033  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:40.393168  495391 kic.go:430] container "newest-cni-577403" state is running.
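The restart path above hinges on a single CLI call, docker container inspect --format={{.State.Status}}: a "Stopped" result triggers docker start, after which the same inspect confirms the container is running. A minimal Go sketch of that status check (the helper name is illustrative, not taken from minikube's source):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState shells out exactly as the log does and returns the
// container's status string (e.g. "running", "exited").
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("newest-cni-577403")
	if err != nil {
		panic(err)
	}
	fmt.Println("state:", state)
}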
	I1018 10:35:40.395377  495391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-577403
	I1018 10:35:40.416009  495391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/config.json ...
	I1018 10:35:40.416240  495391 machine.go:93] provisionDockerMachine start ...
	I1018 10:35:40.416320  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:40.437309  495391 main.go:141] libmachine: Using SSH client type: native
	I1018 10:35:40.437870  495391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1018 10:35:40.437889  495391 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:35:40.438479  495391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45776->127.0.0.1:33459: read: connection reset by peer
	I1018 10:35:43.596981  495391 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-577403
	
	I1018 10:35:43.597011  495391 ubuntu.go:182] provisioning hostname "newest-cni-577403"
	I1018 10:35:43.597080  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:43.615432  495391 main.go:141] libmachine: Using SSH client type: native
	I1018 10:35:43.615750  495391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1018 10:35:43.615768  495391 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-577403 && echo "newest-cni-577403" | sudo tee /etc/hostname
	I1018 10:35:43.783111  495391 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-577403
	
	I1018 10:35:43.783191  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:43.800707  495391 main.go:141] libmachine: Using SSH client type: native
	I1018 10:35:43.801011  495391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1018 10:35:43.801033  495391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-577403' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-577403/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-577403' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:35:43.955118  495391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 10:35:43.955193  495391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:35:43.955246  495391 ubuntu.go:190] setting up certificates
	I1018 10:35:43.955278  495391 provision.go:84] configureAuth start
	I1018 10:35:43.955363  495391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-577403
	I1018 10:35:43.972794  495391 provision.go:143] copyHostCerts
	I1018 10:35:43.972869  495391 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:35:43.972939  495391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:35:43.973068  495391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:35:43.973176  495391 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:35:43.973210  495391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:35:43.973244  495391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:35:43.973382  495391 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:35:43.973391  495391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:35:43.973423  495391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:35:43.973513  495391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.newest-cni-577403 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-577403]
	I1018 10:35:44.275227  495391 provision.go:177] copyRemoteCerts
	I1018 10:35:44.275294  495391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:35:44.275338  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:44.300095  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:44.405483  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:35:44.424129  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 10:35:44.442030  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:35:44.459902  495391 provision.go:87] duration metric: took 504.571348ms to configureAuth
	I1018 10:35:44.459934  495391 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:35:44.460170  495391 config.go:182] Loaded profile config "newest-cni-577403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:35:44.460313  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:44.477530  495391 main.go:141] libmachine: Using SSH client type: native
	I1018 10:35:44.477853  495391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1018 10:35:44.477879  495391 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:35:44.768677  495391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:35:44.768701  495391 machine.go:96] duration metric: took 4.352443773s to provisionDockerMachine
	I1018 10:35:44.768711  495391 start.go:293] postStartSetup for "newest-cni-577403" (driver="docker")
	I1018 10:35:44.768722  495391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:35:44.768802  495391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:35:44.768842  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:44.788260  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	W1018 10:35:43.516141  487845 node_ready.go:57] node "no-preload-027087" has "Ready":"False" status (will retry)
	I1018 10:35:45.517013  487845 node_ready.go:49] node "no-preload-027087" is "Ready"
	I1018 10:35:45.517040  487845 node_ready.go:38] duration metric: took 15.507208383s for node "no-preload-027087" to be "Ready" ...
	I1018 10:35:45.517053  487845 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:35:45.517113  487845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:35:45.536180  487845 api_server.go:72] duration metric: took 17.469094556s to wait for apiserver process to appear ...
	I1018 10:35:45.536208  487845 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:35:45.536229  487845 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 10:35:45.550548  487845 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 10:35:45.551795  487845 api_server.go:141] control plane version: v1.34.1
	I1018 10:35:45.551819  487845 api_server.go:131] duration metric: took 15.604409ms to wait for apiserver health ...
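The healthz probe above is a plain HTTPS GET against the apiserver; a 200 response with body "ok" marks the control plane healthy. A self-contained Go version of the same probe (certificate verification is skipped here only so the sketch runs without the cluster CA bundle; minikube itself verifies against it):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: trust-anything TLS so the example runs without the
		// cluster CA. Real tooling should verify the apiserver certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}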
	I1018 10:35:45.551830  487845 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:35:45.555402  487845 system_pods.go:59] 8 kube-system pods found
	I1018 10:35:45.555437  487845 system_pods.go:61] "coredns-66bc5c9577-wt4wd" [ff570964-d787-4c47-a498-4ac05ed09b0a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:35:45.555444  487845 system_pods.go:61] "etcd-no-preload-027087" [df0b81be-5ccd-481d-88e8-0a351635eab5] Running
	I1018 10:35:45.555450  487845 system_pods.go:61] "kindnet-t9q5g" [4286ff28-6eca-4678-9d54-3a2dbe9bf8d1] Running
	I1018 10:35:45.555454  487845 system_pods.go:61] "kube-apiserver-no-preload-027087" [949b1bb0-6625-40d4-b2a4-75e49fd87133] Running
	I1018 10:35:45.555459  487845 system_pods.go:61] "kube-controller-manager-no-preload-027087" [1395022f-1ef0-43f8-b175-f5c5fdfdb777] Running
	I1018 10:35:45.555464  487845 system_pods.go:61] "kube-proxy-s87k4" [2e127631-8e09-43da-8d5a-7238894eedac] Running
	I1018 10:35:45.555473  487845 system_pods.go:61] "kube-scheduler-no-preload-027087" [dd112b07-cc98-4f21-8211-3ac896ec0be9] Running
	I1018 10:35:45.555480  487845 system_pods.go:61] "storage-provisioner" [b6343f75-ba5e-48f6-8eec-5343cabc28a4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:35:45.555489  487845 system_pods.go:74] duration metric: took 3.652918ms to wait for pod list to return data ...
	I1018 10:35:45.555502  487845 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:35:45.559334  487845 default_sa.go:45] found service account: "default"
	I1018 10:35:45.559355  487845 default_sa.go:55] duration metric: took 3.846538ms for default service account to be created ...
	I1018 10:35:45.559365  487845 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 10:35:45.562454  487845 system_pods.go:86] 8 kube-system pods found
	I1018 10:35:45.562490  487845 system_pods.go:89] "coredns-66bc5c9577-wt4wd" [ff570964-d787-4c47-a498-4ac05ed09b0a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:35:45.562497  487845 system_pods.go:89] "etcd-no-preload-027087" [df0b81be-5ccd-481d-88e8-0a351635eab5] Running
	I1018 10:35:45.562504  487845 system_pods.go:89] "kindnet-t9q5g" [4286ff28-6eca-4678-9d54-3a2dbe9bf8d1] Running
	I1018 10:35:45.562508  487845 system_pods.go:89] "kube-apiserver-no-preload-027087" [949b1bb0-6625-40d4-b2a4-75e49fd87133] Running
	I1018 10:35:45.562513  487845 system_pods.go:89] "kube-controller-manager-no-preload-027087" [1395022f-1ef0-43f8-b175-f5c5fdfdb777] Running
	I1018 10:35:45.562517  487845 system_pods.go:89] "kube-proxy-s87k4" [2e127631-8e09-43da-8d5a-7238894eedac] Running
	I1018 10:35:45.562522  487845 system_pods.go:89] "kube-scheduler-no-preload-027087" [dd112b07-cc98-4f21-8211-3ac896ec0be9] Running
	I1018 10:35:45.562529  487845 system_pods.go:89] "storage-provisioner" [b6343f75-ba5e-48f6-8eec-5343cabc28a4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:35:45.562547  487845 retry.go:31] will retry after 246.903091ms: missing components: kube-dns
	I1018 10:35:45.834464  487845 system_pods.go:86] 8 kube-system pods found
	I1018 10:35:45.834504  487845 system_pods.go:89] "coredns-66bc5c9577-wt4wd" [ff570964-d787-4c47-a498-4ac05ed09b0a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:35:45.834511  487845 system_pods.go:89] "etcd-no-preload-027087" [df0b81be-5ccd-481d-88e8-0a351635eab5] Running
	I1018 10:35:45.834517  487845 system_pods.go:89] "kindnet-t9q5g" [4286ff28-6eca-4678-9d54-3a2dbe9bf8d1] Running
	I1018 10:35:45.834522  487845 system_pods.go:89] "kube-apiserver-no-preload-027087" [949b1bb0-6625-40d4-b2a4-75e49fd87133] Running
	I1018 10:35:45.834526  487845 system_pods.go:89] "kube-controller-manager-no-preload-027087" [1395022f-1ef0-43f8-b175-f5c5fdfdb777] Running
	I1018 10:35:45.834530  487845 system_pods.go:89] "kube-proxy-s87k4" [2e127631-8e09-43da-8d5a-7238894eedac] Running
	I1018 10:35:45.834533  487845 system_pods.go:89] "kube-scheduler-no-preload-027087" [dd112b07-cc98-4f21-8211-3ac896ec0be9] Running
	I1018 10:35:45.834542  487845 system_pods.go:89] "storage-provisioner" [b6343f75-ba5e-48f6-8eec-5343cabc28a4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:35:45.834557  487845 retry.go:31] will retry after 243.620287ms: missing components: kube-dns
	I1018 10:35:46.084007  487845 system_pods.go:86] 8 kube-system pods found
	I1018 10:35:46.084098  487845 system_pods.go:89] "coredns-66bc5c9577-wt4wd" [ff570964-d787-4c47-a498-4ac05ed09b0a] Running
	I1018 10:35:46.084121  487845 system_pods.go:89] "etcd-no-preload-027087" [df0b81be-5ccd-481d-88e8-0a351635eab5] Running
	I1018 10:35:46.084143  487845 system_pods.go:89] "kindnet-t9q5g" [4286ff28-6eca-4678-9d54-3a2dbe9bf8d1] Running
	I1018 10:35:46.084164  487845 system_pods.go:89] "kube-apiserver-no-preload-027087" [949b1bb0-6625-40d4-b2a4-75e49fd87133] Running
	I1018 10:35:46.084185  487845 system_pods.go:89] "kube-controller-manager-no-preload-027087" [1395022f-1ef0-43f8-b175-f5c5fdfdb777] Running
	I1018 10:35:46.084204  487845 system_pods.go:89] "kube-proxy-s87k4" [2e127631-8e09-43da-8d5a-7238894eedac] Running
	I1018 10:35:46.084224  487845 system_pods.go:89] "kube-scheduler-no-preload-027087" [dd112b07-cc98-4f21-8211-3ac896ec0be9] Running
	I1018 10:35:46.084244  487845 system_pods.go:89] "storage-provisioner" [b6343f75-ba5e-48f6-8eec-5343cabc28a4] Running
	I1018 10:35:46.084268  487845 system_pods.go:126] duration metric: took 524.896847ms to wait for k8s-apps to be running ...
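The retry.go lines above implement a poll loop with a short randomized delay: the kube-system pod list is re-fetched until no component is reported missing. A bare-bones Go sketch of that loop shape (the delay range and the stubbed check are placeholders, not minikube's values):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var attempts = 0

// checkComponents stands in for the real pod-list check; it reports an
// error while anything (e.g. kube-dns) is still missing.
func checkComponents() error {
	attempts++
	if attempts < 3 {
		return errors.New("missing components: kube-dns")
	}
	return nil
}

func main() {
	for {
		err := checkComponents()
		if err == nil {
			break
		}
		// Randomized back-off, like "will retry after 246.903091ms" above.
		d := time.Duration(200+rand.Intn(100)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	fmt.Println("all components running")
}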
	I1018 10:35:46.084289  487845 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 10:35:46.084365  487845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:35:46.100826  487845 system_svc.go:56] duration metric: took 16.526053ms WaitForService to wait for kubelet
	I1018 10:35:46.100852  487845 kubeadm.go:586] duration metric: took 18.033771789s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:35:46.100870  487845 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:35:46.104372  487845 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:35:46.104401  487845 node_conditions.go:123] node cpu capacity is 2
	I1018 10:35:46.104413  487845 node_conditions.go:105] duration metric: took 3.538407ms to run NodePressure ...
	I1018 10:35:46.104426  487845 start.go:241] waiting for startup goroutines ...
	I1018 10:35:46.104434  487845 start.go:246] waiting for cluster config update ...
	I1018 10:35:46.104446  487845 start.go:255] writing updated cluster config ...
	I1018 10:35:46.104763  487845 ssh_runner.go:195] Run: rm -f paused
	I1018 10:35:46.108852  487845 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:35:46.112794  487845 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wt4wd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.118588  487845 pod_ready.go:94] pod "coredns-66bc5c9577-wt4wd" is "Ready"
	I1018 10:35:46.118722  487845 pod_ready.go:86] duration metric: took 5.859296ms for pod "coredns-66bc5c9577-wt4wd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.121348  487845 pod_ready.go:83] waiting for pod "etcd-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.130565  487845 pod_ready.go:94] pod "etcd-no-preload-027087" is "Ready"
	I1018 10:35:46.130641  487845 pod_ready.go:86] duration metric: took 9.222341ms for pod "etcd-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.134263  487845 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.144400  487845 pod_ready.go:94] pod "kube-apiserver-no-preload-027087" is "Ready"
	I1018 10:35:46.144471  487845 pod_ready.go:86] duration metric: took 10.141532ms for pod "kube-apiserver-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.147094  487845 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.519901  487845 pod_ready.go:94] pod "kube-controller-manager-no-preload-027087" is "Ready"
	I1018 10:35:46.519934  487845 pod_ready.go:86] duration metric: took 372.764875ms for pod "kube-controller-manager-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:44.897372  495391 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:35:44.901120  495391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:35:44.901148  495391 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:35:44.901159  495391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:35:44.901243  495391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:35:44.901323  495391 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:35:44.901441  495391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:35:44.909035  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:35:44.930516  495391 start.go:296] duration metric: took 161.788667ms for postStartSetup
	I1018 10:35:44.930615  495391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:35:44.930669  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:44.948531  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:45.062980  495391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:35:45.075232  495391 fix.go:56] duration metric: took 5.000434531s for fixHost
	I1018 10:35:45.075257  495391 start.go:83] releasing machines lock for "newest-cni-577403", held for 5.000496094s
	I1018 10:35:45.075345  495391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-577403
	I1018 10:35:45.122222  495391 ssh_runner.go:195] Run: cat /version.json
	I1018 10:35:45.122300  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:45.133589  495391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:35:45.133670  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:45.178589  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:45.193667  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:45.377497  495391 ssh_runner.go:195] Run: systemctl --version
	I1018 10:35:45.492016  495391 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:35:45.558708  495391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:35:45.565479  495391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:35:45.565553  495391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:35:45.579827  495391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 10:35:45.579851  495391 start.go:495] detecting cgroup driver to use...
	I1018 10:35:45.579883  495391 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:35:45.579941  495391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:35:45.599962  495391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:35:45.615542  495391 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:35:45.615607  495391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:35:45.631974  495391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:35:45.645768  495391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:35:45.850945  495391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:35:46.040544  495391 docker.go:234] disabling docker service ...
	I1018 10:35:46.040672  495391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:35:46.058366  495391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:35:46.072599  495391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:35:46.219042  495391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:35:46.339609  495391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:35:46.352898  495391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:35:46.367774  495391 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:35:46.367924  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.376873  495391 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:35:46.376976  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.385883  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.394716  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.410171  495391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:35:46.418435  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.427933  495391 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.436348  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.445365  495391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:35:46.453322  495391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:35:46.460779  495391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:35:46.597365  495391 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 10:35:46.749245  495391 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:35:46.749368  495391 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:35:46.753646  495391 start.go:563] Will wait 60s for crictl version
	I1018 10:35:46.753713  495391 ssh_runner.go:195] Run: which crictl
	I1018 10:35:46.757613  495391 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:35:46.783482  495391 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
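Both "Will wait 60s" steps above are the same pattern: poll a cheap check (a stat on the socket, then a crictl version call) against a deadline. A generic Go sketch of the socket half (only the path and timeout come from the log; the poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a filesystem path until it exists or the
// deadline passes, mirroring "Will wait 60s for socket path".
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // illustrative poll interval
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("socket is up")
}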
	I1018 10:35:46.783572  495391 ssh_runner.go:195] Run: crio --version
	I1018 10:35:46.814334  495391 ssh_runner.go:195] Run: crio --version
	I1018 10:35:46.847943  495391 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:35:46.850766  495391 cli_runner.go:164] Run: docker network inspect newest-cni-577403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:35:46.867061  495391 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 10:35:46.870908  495391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
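The bash one-liner above updates /etc/hosts idempotently: grep -v drops any existing host.minikube.internal mapping, the fresh entry is appended, and the result is copied back into place via a temp file. A rough Go equivalent of that filter-and-append pattern (writing the file directly, without the temp-file/sudo step, to keep the sketch short):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any stale host.minikube.internal mapping (grep -v in the log).
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile("/etc/hosts", []byte(out), 0644); err != nil {
		panic(err) // needs root, like the sudo cp in the log
	}
}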
	I1018 10:35:46.885297  495391 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 10:35:46.714713  487845 pod_ready.go:83] waiting for pod "kube-proxy-s87k4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:47.112811  487845 pod_ready.go:94] pod "kube-proxy-s87k4" is "Ready"
	I1018 10:35:47.112845  487845 pod_ready.go:86] duration metric: took 398.049543ms for pod "kube-proxy-s87k4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:47.313325  487845 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:47.713727  487845 pod_ready.go:94] pod "kube-scheduler-no-preload-027087" is "Ready"
	I1018 10:35:47.713752  487845 pod_ready.go:86] duration metric: took 400.404874ms for pod "kube-scheduler-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:47.713763  487845 pod_ready.go:40] duration metric: took 1.604831389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:35:47.799613  487845 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:35:47.806148  487845 out.go:179] * Done! kubectl is now configured to use "no-preload-027087" cluster and "default" namespace by default
	I1018 10:35:46.888166  495391 kubeadm.go:883] updating cluster {Name:newest-cni-577403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:35:46.888290  495391 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:35:46.888365  495391 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:35:46.930436  495391 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:35:46.930456  495391 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:35:46.930517  495391 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:35:46.959600  495391 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:35:46.959678  495391 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:35:46.959700  495391 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 10:35:46.959834  495391 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-577403 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 10:35:46.959960  495391 ssh_runner.go:195] Run: crio config
	I1018 10:35:47.017741  495391 cni.go:84] Creating CNI manager for ""
	I1018 10:35:47.017759  495391 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:35:47.017777  495391 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 10:35:47.017803  495391 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-577403 NodeName:newest-cni-577403 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:35:47.017948  495391 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-577403"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 10:35:47.018023  495391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:35:47.027031  495391 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:35:47.027117  495391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:35:47.035837  495391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 10:35:47.049534  495391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:35:47.063007  495391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1018 10:35:47.076433  495391 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:35:47.080306  495391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:35:47.090375  495391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:35:47.204325  495391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:35:47.225369  495391 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403 for IP: 192.168.85.2
	I1018 10:35:47.225436  495391 certs.go:195] generating shared ca certs ...
	I1018 10:35:47.225467  495391 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:47.225631  495391 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:35:47.225720  495391 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:35:47.225752  495391 certs.go:257] generating profile certs ...
	I1018 10:35:47.225860  495391 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/client.key
	I1018 10:35:47.225960  495391 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.key.da20550e
	I1018 10:35:47.226032  495391 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/proxy-client.key
	I1018 10:35:47.226191  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:35:47.226258  495391 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:35:47.226290  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:35:47.226337  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:35:47.226389  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:35:47.226432  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:35:47.226504  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:35:47.227115  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:35:47.249725  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:35:47.270527  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:35:47.292026  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:35:47.315038  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 10:35:47.335134  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:35:47.354499  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:35:47.381269  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 10:35:47.402919  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:35:47.434336  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:35:47.453622  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:35:47.473935  495391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:35:47.487419  495391 ssh_runner.go:195] Run: openssl version
	I1018 10:35:47.500286  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:35:47.517526  495391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:35:47.522408  495391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:35:47.522527  495391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:35:47.568019  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:35:47.577990  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:35:47.586709  495391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:35:47.590959  495391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:35:47.591035  495391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:35:47.632731  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:35:47.641420  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:35:47.650459  495391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:35:47.654271  495391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:35:47.654339  495391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:35:47.696977  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 10:35:47.705648  495391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:35:47.709701  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 10:35:47.758867  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 10:35:47.806936  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 10:35:47.933964  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 10:35:48.116339  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 10:35:48.246261  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
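Each openssl x509 -checkend 86400 run above asserts that a certificate remains valid for at least the next 24 hours. A minimal Go version of the same check (the single hard-coded path is illustrative; the log checks several certs under /var/lib/minikube/certs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
	// expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}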
	I1018 10:35:48.327624  495391 kubeadm.go:400] StartCluster: {Name:newest-cni-577403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:35:48.327723  495391 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:35:48.327796  495391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:35:48.380000  495391 cri.go:89] found id: "4931d62ebebc151d36b33ceac56370520ce022b159f398ba2c6d4d5335fe5cd5"
	I1018 10:35:48.380023  495391 cri.go:89] found id: "8598a86fdd0b5578be0124e533f2578cdbca59b60d2e2c51ec223a9bceea0ced"
	I1018 10:35:48.380029  495391 cri.go:89] found id: "b81cb4d2f278c266341a3cd9b07f6427e26118aa0c261292dea5cf46666371e8"
	I1018 10:35:48.380033  495391 cri.go:89] found id: "f87e3ee83d2a07038569e0e133062e319fd5545af2a5f970168374c1227e8428"
	I1018 10:35:48.380036  495391 cri.go:89] found id: ""
	I1018 10:35:48.380090  495391 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 10:35:48.396198  495391 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:35:48Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:35:48.396286  495391 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:35:48.426352  495391 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 10:35:48.426372  495391 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 10:35:48.426444  495391 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 10:35:48.442293  495391 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 10:35:48.442875  495391 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-577403" does not appear in /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:35:48.443165  495391 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-293333/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-577403" cluster setting kubeconfig missing "newest-cni-577403" context setting]
	I1018 10:35:48.443641  495391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:48.446016  495391 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 10:35:48.467094  495391 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 10:35:48.467128  495391 kubeadm.go:601] duration metric: took 40.749819ms to restartPrimaryControlPlane
	I1018 10:35:48.467138  495391 kubeadm.go:402] duration metric: took 139.524326ms to StartCluster
	I1018 10:35:48.467152  495391 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:48.467216  495391 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:35:48.468223  495391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:48.468465  495391 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:35:48.468802  495391 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:35:48.468876  495391 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-577403"
	I1018 10:35:48.468892  495391 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-577403"
	W1018 10:35:48.468903  495391 addons.go:247] addon storage-provisioner should already be in state true
	I1018 10:35:48.468923  495391 host.go:66] Checking if "newest-cni-577403" exists ...
	I1018 10:35:48.469552  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:48.469959  495391 config.go:182] Loaded profile config "newest-cni-577403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:35:48.470017  495391 addons.go:69] Setting default-storageclass=true in profile "newest-cni-577403"
	I1018 10:35:48.470036  495391 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-577403"
	I1018 10:35:48.470298  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:48.472740  495391 addons.go:69] Setting dashboard=true in profile "newest-cni-577403"
	I1018 10:35:48.472770  495391 addons.go:238] Setting addon dashboard=true in "newest-cni-577403"
	W1018 10:35:48.472778  495391 addons.go:247] addon dashboard should already be in state true
	I1018 10:35:48.472812  495391 host.go:66] Checking if "newest-cni-577403" exists ...
	I1018 10:35:48.473342  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:48.473871  495391 out.go:179] * Verifying Kubernetes components...
	I1018 10:35:48.480751  495391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:35:48.536239  495391 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 10:35:48.536368  495391 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:35:48.539243  495391 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:35:48.539278  495391 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:35:48.539248  495391 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 10:35:48.539348  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:48.541507  495391 addons.go:238] Setting addon default-storageclass=true in "newest-cni-577403"
	W1018 10:35:48.541533  495391 addons.go:247] addon default-storageclass should already be in state true
	I1018 10:35:48.541557  495391 host.go:66] Checking if "newest-cni-577403" exists ...
	I1018 10:35:48.541970  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:48.542272  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 10:35:48.542291  495391 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 10:35:48.542346  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:48.586169  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:48.606874  495391 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:35:48.606900  495391 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:35:48.606963  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:48.616866  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:48.642774  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:48.785117  495391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:35:48.820842  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 10:35:48.820864  495391 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 10:35:48.845950  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 10:35:48.845972  495391 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 10:35:48.869686  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 10:35:48.869707  495391 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 10:35:48.899843  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 10:35:48.899925  495391 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 10:35:48.922442  495391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:35:48.951187  495391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:35:48.992259  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 10:35:48.992336  495391 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 10:35:49.090634  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 10:35:49.090658  495391 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 10:35:49.154528  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 10:35:49.154552  495391 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 10:35:49.196258  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 10:35:49.196290  495391 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 10:35:49.219095  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 10:35:49.219135  495391 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 10:35:49.245405  495391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 10:35:54.520126  495391 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.734915278s)
	I1018 10:35:54.520198  495391 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.597676781s)
	I1018 10:35:54.520518  495391 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.569265757s)
	I1018 10:35:54.520558  495391 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:35:54.520617  495391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:35:54.520760  495391 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.2752767s)
	I1018 10:35:54.523991  495391 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-577403 addons enable metrics-server
	
	I1018 10:35:54.544161  495391 api_server.go:72] duration metric: took 6.07565302s to wait for apiserver process to appear ...
	I1018 10:35:54.544183  495391 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:35:54.544201  495391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 10:35:54.557901  495391 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 10:35:54.557975  495391 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 10:35:54.567339  495391 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 10:35:54.570380  495391 addons.go:514] duration metric: took 6.101573061s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 10:35:55.045029  495391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 10:35:55.053858  495391 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 10:35:55.055336  495391 api_server.go:141] control plane version: v1.34.1
	I1018 10:35:55.055366  495391 api_server.go:131] duration metric: took 511.176614ms to wait for apiserver health ...
	I1018 10:35:55.055377  495391 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:35:55.060646  495391 system_pods.go:59] 8 kube-system pods found
	I1018 10:35:55.060696  495391 system_pods.go:61] "coredns-66bc5c9577-g5hjd" [d8506151-9057-4d64-9951-94bfc8e48157] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 10:35:55.060706  495391 system_pods.go:61] "etcd-newest-cni-577403" [9061973a-4cc4-4701-ac68-b463a5c36efe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:35:55.060711  495391 system_pods.go:61] "kindnet-dc6mn" [59b45574-ece2-4376-aacf-8e87cb8f03e7] Running
	I1018 10:35:55.060719  495391 system_pods.go:61] "kube-apiserver-newest-cni-577403" [bfab2b0b-ff85-4eb8-8e64-157577d51881] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:35:55.060730  495391 system_pods.go:61] "kube-controller-manager-newest-cni-577403" [0ffcd3ef-9adb-437c-9c04-32638238a83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:35:55.060738  495391 system_pods.go:61] "kube-proxy-4twn2" [060f019f-35b3-47a0-af70-f480829d1715] Running
	I1018 10:35:55.060744  495391 system_pods.go:61] "kube-scheduler-newest-cni-577403" [6c0fc2df-7ebe-4634-828f-7febca31dffc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:35:55.060767  495391 system_pods.go:61] "storage-provisioner" [2f727e8b-afd6-4e3e-96f3-a9d649d239ff] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 10:35:55.060774  495391 system_pods.go:74] duration metric: took 5.391812ms to wait for pod list to return data ...
	I1018 10:35:55.060789  495391 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:35:55.063384  495391 default_sa.go:45] found service account: "default"
	I1018 10:35:55.063414  495391 default_sa.go:55] duration metric: took 2.617329ms for default service account to be created ...
	I1018 10:35:55.063436  495391 kubeadm.go:586] duration metric: took 6.594929328s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 10:35:55.063456  495391 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:35:55.068366  495391 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:35:55.068419  495391 node_conditions.go:123] node cpu capacity is 2
	I1018 10:35:55.068499  495391 node_conditions.go:105] duration metric: took 4.971713ms to run NodePressure ...
	I1018 10:35:55.068518  495391 start.go:241] waiting for startup goroutines ...
	I1018 10:35:55.068526  495391 start.go:246] waiting for cluster config update ...
	I1018 10:35:55.068541  495391 start.go:255] writing updated cluster config ...
	I1018 10:35:55.068886  495391 ssh_runner.go:195] Run: rm -f paused
	I1018 10:35:55.161466  495391 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:35:55.164855  495391 out.go:179] * Done! kubectl is now configured to use "newest-cni-577403" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.722269945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.72857641Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-4twn2/POD" id=89fbde2c-f237-4495-b75c-f489aebe006c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.728658421Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.738181436Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=937469e2-fa77-4b45-8528-97c197e2783e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.738660514Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=89fbde2c-f237-4495-b75c-f489aebe006c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.748957495Z" level=info msg="Ran pod sandbox cd5598ce8b18d7fc270737550824f81afecf7a31744587cf2f90c29886850739 with infra container: kube-system/kube-proxy-4twn2/POD" id=89fbde2c-f237-4495-b75c-f489aebe006c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.753073796Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d0583c7f-c513-4f0a-bdec-996f9535630f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.754483939Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=bd88f553-82da-4fbf-86b6-60c61384297d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.756177211Z" level=info msg="Creating container: kube-system/kube-proxy-4twn2/kube-proxy" id=8d7d105e-7bb5-42af-a59e-03527a87b07f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.756636835Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.771221795Z" level=info msg="Ran pod sandbox 8406f7ff2c8baf6de3d6f07b4655d6c5fff6fbac43b6bbd56ca319d362e9c840 with infra container: kube-system/kindnet-dc6mn/POD" id=937469e2-fa77-4b45-8528-97c197e2783e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.77847681Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=833b4518-46ec-4907-ab1e-8d3ed99db56d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.780127652Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5c712c8b-dd50-4022-9676-f260163e8038 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.782036911Z" level=info msg="Creating container: kube-system/kindnet-dc6mn/kindnet-cni" id=218bdadc-309f-4a15-a3cc-442fb21a9591 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.782778976Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.802625744Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.804047792Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.807146259Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.812610072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.844227948Z" level=info msg="Created container 143638a5b7bf803e029dbffcae584d824d7c7a881004b359c01169857f62bcd3: kube-system/kindnet-dc6mn/kindnet-cni" id=218bdadc-309f-4a15-a3cc-442fb21a9591 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.845171017Z" level=info msg="Starting container: 143638a5b7bf803e029dbffcae584d824d7c7a881004b359c01169857f62bcd3" id=87806a7e-5dc4-427f-a573-dd409fe0d1eb name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.850961669Z" level=info msg="Started container" PID=1056 containerID=143638a5b7bf803e029dbffcae584d824d7c7a881004b359c01169857f62bcd3 description=kube-system/kindnet-dc6mn/kindnet-cni id=87806a7e-5dc4-427f-a573-dd409fe0d1eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=8406f7ff2c8baf6de3d6f07b4655d6c5fff6fbac43b6bbd56ca319d362e9c840
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.853545815Z" level=info msg="Created container 819fe87a1d42b01fc86148fa045944c436638f510eb5f3bd9020c228e244a301: kube-system/kube-proxy-4twn2/kube-proxy" id=8d7d105e-7bb5-42af-a59e-03527a87b07f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.856339713Z" level=info msg="Starting container: 819fe87a1d42b01fc86148fa045944c436638f510eb5f3bd9020c228e244a301" id=e1338dbd-9524-4d7e-b743-2d212a8254ae name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:35:53 newest-cni-577403 crio[610]: time="2025-10-18T10:35:53.869702766Z" level=info msg="Started container" PID=1057 containerID=819fe87a1d42b01fc86148fa045944c436638f510eb5f3bd9020c228e244a301 description=kube-system/kube-proxy-4twn2/kube-proxy id=e1338dbd-9524-4d7e-b743-2d212a8254ae name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd5598ce8b18d7fc270737550824f81afecf7a31744587cf2f90c29886850739
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	143638a5b7bf8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   8 seconds ago       Running             kindnet-cni               1                   8406f7ff2c8ba       kindnet-dc6mn                               kube-system
	819fe87a1d42b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   8 seconds ago       Running             kube-proxy                1                   cd5598ce8b18d       kube-proxy-4twn2                            kube-system
	4931d62ebebc1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            1                   52453295cd691       kube-apiserver-newest-cni-577403            kube-system
	8598a86fdd0b5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      1                   899be4382bb59       etcd-newest-cni-577403                      kube-system
	b81cb4d2f278c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            1                   3c895e7c8388d       kube-scheduler-newest-cni-577403            kube-system
	f87e3ee83d2a0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   1                   06713086276c6       kube-controller-manager-newest-cni-577403   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-577403
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-577403
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=newest-cni-577403
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_35_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:35:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-577403
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:35:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:35:53 +0000   Sat, 18 Oct 2025 10:35:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:35:53 +0000   Sat, 18 Oct 2025 10:35:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:35:53 +0000   Sat, 18 Oct 2025 10:35:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 10:35:53 +0000   Sat, 18 Oct 2025 10:35:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-577403
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                094f3965-d36a-4b5c-959d-94a9f33348db
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-577403                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-dc6mn                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-577403             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-577403    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-4twn2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-577403             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 7s                 kube-proxy       
	  Warning  CgroupV1                 43s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node newest-cni-577403 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node newest-cni-577403 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     43s (x8 over 43s)  kubelet          Node newest-cni-577403 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node newest-cni-577403 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node newest-cni-577403 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s                kubelet          Node newest-cni-577403 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s                node-controller  Node newest-cni-577403 event: Registered Node newest-cni-577403 in Controller
	  Normal   RegisteredNode           6s                 node-controller  Node newest-cni-577403 event: Registered Node newest-cni-577403 in Controller
	
	
	==> dmesg <==
	[Oct18 10:16] overlayfs: idmapped layers are currently not supported
	[  +1.944912] overlayfs: idmapped layers are currently not supported
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	[ +55.677340] overlayfs: idmapped layers are currently not supported
	[  +3.870584] overlayfs: idmapped layers are currently not supported
	[Oct18 10:24] overlayfs: idmapped layers are currently not supported
	[ +31.226998] overlayfs: idmapped layers are currently not supported
	[Oct18 10:27] overlayfs: idmapped layers are currently not supported
	[ +41.576921] overlayfs: idmapped layers are currently not supported
	[  +5.117406] overlayfs: idmapped layers are currently not supported
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	[Oct18 10:31] overlayfs: idmapped layers are currently not supported
	[  +3.453230] overlayfs: idmapped layers are currently not supported
	[Oct18 10:33] overlayfs: idmapped layers are currently not supported
	[  +6.524055] overlayfs: idmapped layers are currently not supported
	[Oct18 10:34] overlayfs: idmapped layers are currently not supported
	[Oct18 10:35] overlayfs: idmapped layers are currently not supported
	[ +27.675349] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8598a86fdd0b5578be0124e533f2578cdbca59b60d2e2c51ec223a9bceea0ced] <==
	{"level":"warn","ts":"2025-10-18T10:35:51.856810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:51.876082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:51.899126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:51.913164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:51.930064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:51.945491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:51.961970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:51.978643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:51.994264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.020741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.035229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.047355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.063721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.080945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.096665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.114493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.132463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.146257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.165938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.182427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.201460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.227832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.243255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.263078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:52.330616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54198","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:02 up  2:18,  0 user,  load average: 3.93, 4.22, 3.37
	Linux newest-cni-577403 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [143638a5b7bf803e029dbffcae584d824d7c7a881004b359c01169857f62bcd3] <==
	I1018 10:35:54.013066       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:35:54.013373       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 10:35:54.013514       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:35:54.013528       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:35:54.013544       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:35:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:35:54.309276       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:35:54.309320       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:35:54.309334       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:35:54.309812       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [4931d62ebebc151d36b33ceac56370520ce022b159f398ba2c6d4d5335fe5cd5] <==
	I1018 10:35:53.240148       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 10:35:53.240201       1 policy_source.go:240] refreshing policies
	I1018 10:35:53.249266       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 10:35:53.275520       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 10:35:53.314458       1 aggregator.go:171] initial CRD sync complete...
	I1018 10:35:53.314486       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 10:35:53.314495       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 10:35:53.314501       1 cache.go:39] Caches are synced for autoregister controller
	I1018 10:35:53.381316       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:35:53.383095       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 10:35:53.383395       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 10:35:53.383518       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 10:35:53.388954       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 10:35:53.570426       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 10:35:53.934683       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 10:35:54.042176       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 10:35:54.132220       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 10:35:54.253950       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 10:35:54.292968       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 10:35:54.409615       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.128.13"}
	I1018 10:35:54.433666       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.144.59"}
	I1018 10:35:56.659855       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 10:35:56.702187       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 10:35:56.845725       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 10:35:57.095976       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [f87e3ee83d2a07038569e0e133062e319fd5545af2a5f970168374c1227e8428] <==
	I1018 10:35:56.590744       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 10:35:56.591192       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 10:35:56.591775       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 10:35:56.591923       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 10:35:56.592104       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 10:35:56.601680       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 10:35:56.603362       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 10:35:56.603413       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 10:35:56.606080       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 10:35:56.607310       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 10:35:56.607695       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 10:35:56.607756       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 10:35:56.618879       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:35:56.618906       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 10:35:56.618914       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 10:35:56.634740       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 10:35:56.635895       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 10:35:56.635951       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 10:35:56.636915       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 10:35:56.636961       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 10:35:56.638955       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 10:35:56.639020       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 10:35:56.639074       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 10:35:56.652129       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:35:56.653647       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [819fe87a1d42b01fc86148fa045944c436638f510eb5f3bd9020c228e244a301] <==
	I1018 10:35:54.079392       1 server_linux.go:53] "Using iptables proxy"
	I1018 10:35:54.363016       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 10:35:54.463866       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 10:35:54.463911       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 10:35:54.463988       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 10:35:54.585154       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:35:54.585363       1 server_linux.go:132] "Using iptables Proxier"
	I1018 10:35:54.601782       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 10:35:54.602138       1 server.go:527] "Version info" version="v1.34.1"
	I1018 10:35:54.602183       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:35:54.604446       1 config.go:200] "Starting service config controller"
	I1018 10:35:54.604520       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 10:35:54.604566       1 config.go:106] "Starting endpoint slice config controller"
	I1018 10:35:54.604596       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 10:35:54.604637       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 10:35:54.604665       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 10:35:54.605399       1 config.go:309] "Starting node config controller"
	I1018 10:35:54.605454       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 10:35:54.605484       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 10:35:54.704693       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 10:35:54.704789       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 10:35:54.704699       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [b81cb4d2f278c266341a3cd9b07f6427e26118aa0c261292dea5cf46666371e8] <==
	I1018 10:35:50.287126       1 serving.go:386] Generated self-signed cert in-memory
	W1018 10:35:53.125930       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 10:35:53.125974       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 10:35:53.125985       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 10:35:53.125992       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 10:35:53.295966       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 10:35:53.295997       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:35:53.328059       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 10:35:53.332002       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:35:53.335661       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:35:53.332031       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 10:35:53.437068       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 10:35:51 newest-cni-577403 kubelet[727]: E1018 10:35:51.306406     727 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-577403\" not found" node="newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.062714     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: E1018 10:35:53.330935     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-577403\" already exists" pod="kube-system/etcd-newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.330975     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.341695     727 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.341802     727 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.341846     727 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.349340     727 apiserver.go:52] "Watching apiserver"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.349699     727 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: E1018 10:35:53.435187     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-577403\" already exists" pod="kube-system/kube-apiserver-newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.441459     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.468172     727 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: E1018 10:35:53.476054     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-577403\" already exists" pod="kube-system/kube-controller-manager-newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.476329     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: E1018 10:35:53.494803     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-577403\" already exists" pod="kube-system/kube-scheduler-newest-cni-577403"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.551200     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/060f019f-35b3-47a0-af70-f480829d1715-xtables-lock\") pod \"kube-proxy-4twn2\" (UID: \"060f019f-35b3-47a0-af70-f480829d1715\") " pod="kube-system/kube-proxy-4twn2"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.551314     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/59b45574-ece2-4376-aacf-8e87cb8f03e7-cni-cfg\") pod \"kindnet-dc6mn\" (UID: \"59b45574-ece2-4376-aacf-8e87cb8f03e7\") " pod="kube-system/kindnet-dc6mn"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.551338     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59b45574-ece2-4376-aacf-8e87cb8f03e7-xtables-lock\") pod \"kindnet-dc6mn\" (UID: \"59b45574-ece2-4376-aacf-8e87cb8f03e7\") " pod="kube-system/kindnet-dc6mn"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.551355     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59b45574-ece2-4376-aacf-8e87cb8f03e7-lib-modules\") pod \"kindnet-dc6mn\" (UID: \"59b45574-ece2-4376-aacf-8e87cb8f03e7\") " pod="kube-system/kindnet-dc6mn"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.551382     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/060f019f-35b3-47a0-af70-f480829d1715-lib-modules\") pod \"kube-proxy-4twn2\" (UID: \"060f019f-35b3-47a0-af70-f480829d1715\") " pod="kube-system/kube-proxy-4twn2"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: I1018 10:35:53.587298     727 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 10:35:53 newest-cni-577403 kubelet[727]: W1018 10:35:53.746049     727 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8f5c98145c704405d054a99fb1b9f8a4c6c9f65bcae4a8d880cc8e6e2ead7b07/crio-cd5598ce8b18d7fc270737550824f81afecf7a31744587cf2f90c29886850739 WatchSource:0}: Error finding container cd5598ce8b18d7fc270737550824f81afecf7a31744587cf2f90c29886850739: Status 404 returned error can't find the container with id cd5598ce8b18d7fc270737550824f81afecf7a31744587cf2f90c29886850739
	Oct 18 10:35:56 newest-cni-577403 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 10:35:56 newest-cni-577403 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 10:35:56 newest-cni-577403 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-577403 -n newest-cni-577403
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-577403 -n newest-cni-577403: exit status 2 (381.454453ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-577403 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-g5hjd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m4xxl kubernetes-dashboard-855c9754f9-wkntf
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-577403 describe pod coredns-66bc5c9577-g5hjd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m4xxl kubernetes-dashboard-855c9754f9-wkntf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-577403 describe pod coredns-66bc5c9577-g5hjd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m4xxl kubernetes-dashboard-855c9754f9-wkntf: exit status 1 (86.075929ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-g5hjd" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-m4xxl" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wkntf" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-577403 describe pod coredns-66bc5c9577-g5hjd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-m4xxl kubernetes-dashboard-855c9754f9-wkntf: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.45s)
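Note on the NotFound errors above: the pod names are collected with "-A" (all namespaces), but the describe step then runs "kubectl describe pod <names>" with no "-n" flag, so the lookup happens in the context's default namespace and pods that live in kube-system or kubernetes-dashboard predictably come back NotFound. A minimal Go sketch (hypothetical; not the suite's actual helper) that keeps the namespace attached to each name:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "newest-cni-577403" // profile/context from the log above
		// Emit "namespace/name" pairs so the namespace survives the hand-off.
		out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
			"-o=jsonpath={range .items[*]}{.metadata.namespace}/{.metadata.name}{\"\\n\"}{end}",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			fmt.Println("list failed:", err)
			return
		}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			ns, name, ok := strings.Cut(line, "/")
			if !ok {
				continue
			}
			// Each describe now resolves in the pod's own namespace.
			b, _ := exec.Command("kubectl", "--context", ctx,
				"describe", "pod", name, "-n", ns).CombinedOutput()
			fmt.Printf("--- %s/%s ---\n%s", ns, name, b)
		}
	}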

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-027087 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-027087 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (371.342482ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:35:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-027087 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
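The exit status 11 comes from minikube's paused-state probe rather than from the addon itself: before enabling an addon it asks the runtime which containers are paused, and on this crio node that probe shells out to "sudo runc list -f json", which fails because /run/runc (runc's default state directory) does not yet exist on the freshly restarted node. A minimal sketch of that kind of probe, assuming a local runc on PATH (hypothetical; not minikube's source):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "running", "paused"
	}

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// This is the branch taken in the log: runc exits 1 with
			// "open /run/runc: no such file or directory".
			log.Fatalf("runc list failed: %v", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil { // "null" (no containers) unmarshals to an empty slice
			log.Fatal(err)
		}
		for _, c := range cs {
			if c.Status == "paused" {
				fmt.Println("paused container:", c.ID)
			}
		}
	}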
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-027087 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-027087 describe deploy/metrics-server -n kube-system: exit status 1 (110.068934ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-027087 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
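The assertion chain here is: enable the addon with the image and registry overrides, then describe the resulting deployment and require the rewritten image reference ("fake.domain/registry.k8s.io/echoserver:1.4") to appear in the output. Because the enable step already exited 11, the deployment was never created and the substring check runs against empty output. A rough sketch of that shape (condensed from the log; not the test's verbatim source):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// checkAddonImage mirrors the failing assertion: describe the
	// metrics-server deployment and require the registry-rewritten image.
	func checkAddonImage(ctx string) error {
		out, _ := exec.Command("kubectl", "--context", ctx, "describe",
			"deploy/metrics-server", "-n", "kube-system").CombinedOutput()
		want := "fake.domain/registry.k8s.io/echoserver:1.4"
		if !strings.Contains(string(out), want) {
			return fmt.Errorf("addon did not load correct image %q; got:\n%s", want, out)
		}
		return nil
	}

	func main() {
		if err := checkAddonImage("no-preload-027087"); err != nil {
			fmt.Println(err)
		}
	}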
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-027087
helpers_test.go:243: (dbg) docker inspect no-preload-027087:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75",
	        "Created": "2025-10-18T10:34:32.909990218Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 488149,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:34:32.975867802Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/hostname",
	        "HostsPath": "/var/lib/docker/containers/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/hosts",
	        "LogPath": "/var/lib/docker/containers/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75-json.log",
	        "Name": "/no-preload-027087",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-027087:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-027087",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75",
	                "LowerDir": "/var/lib/docker/overlay2/dc44eaa4dc21510f8bf74df6fee94b5b27213db1c9918e5fa7933fdeabf5674e-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dc44eaa4dc21510f8bf74df6fee94b5b27213db1c9918e5fa7933fdeabf5674e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dc44eaa4dc21510f8bf74df6fee94b5b27213db1c9918e5fa7933fdeabf5674e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dc44eaa4dc21510f8bf74df6fee94b5b27213db1c9918e5fa7933fdeabf5674e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-027087",
	                "Source": "/var/lib/docker/volumes/no-preload-027087/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-027087",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-027087",
	                "name.minikube.sigs.k8s.io": "no-preload-027087",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "83a243a6694abe84c05e1fb479281b4165e6aa5bbaa5ec719c5a5cab9944a4d0",
	            "SandboxKey": "/var/run/docker/netns/83a243a6694a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-027087": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:f5:eb:c4:84:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "87a54e6a9010d18200cc9cc9a9c81fbb30eaec85d99c1ec1614afefa1f14d2cb",
	                    "EndpointID": "b9ea772dc4aff2ab20205cb67c80789dbab50cc48ca22c3a762b2922bc969245",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-027087",
	                        "f282a9c13400"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
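One practical detail in the inspect output above: every exposed guest port is bound to an ephemeral host port on 127.0.0.1 (the API server's 8443/tcp lands on 33452 here), and the suite extracts such mappings with the same Go template visible in the "docker container inspect -f" calls later in this log. A self-contained sketch of that extraction (hypothetical wrapper; the template itself is taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Pull the host port bound to the API server port (8443/tcp).
		tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", tmpl, "no-preload-027087").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("apiserver reachable at 127.0.0.1:" + strings.TrimSpace(string(out)))
	}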
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-027087 -n no-preload-027087
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-027087 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-027087 logs -n 25: (1.426254213s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p default-k8s-diff-port-715182 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-101897 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │                     │
	│ stop    │ -p embed-certs-101897 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-715182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-101897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:33 UTC │
	│ start   │ -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:33 UTC │ 18 Oct 25 10:34 UTC │
	│ image   │ default-k8s-diff-port-715182 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ pause   │ -p default-k8s-diff-port-715182 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-715182                                                                                                                                                                                                               │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p default-k8s-diff-port-715182                                                                                                                                                                                                               │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-922359                                                                                                                                                                                                               │ disable-driver-mounts-922359 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ start   │ -p no-preload-027087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:35 UTC │
	│ image   │ embed-certs-101897 image list --format=json                                                                                                                                                                                                   │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ pause   │ -p embed-certs-101897 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	│ delete  │ -p embed-certs-101897                                                                                                                                                                                                                         │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p embed-certs-101897                                                                                                                                                                                                                         │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ start   │ -p newest-cni-577403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:35 UTC │
	│ addons  │ enable metrics-server -p newest-cni-577403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │                     │
	│ stop    │ -p newest-cni-577403 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ addons  │ enable dashboard -p newest-cni-577403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ start   │ -p newest-cni-577403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ image   │ newest-cni-577403 image list --format=json                                                                                                                                                                                                    │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ pause   │ -p newest-cni-577403 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-027087 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:35:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 10:35:39.801164  495391 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:35:39.801334  495391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:35:39.801345  495391 out.go:374] Setting ErrFile to fd 2...
	I1018 10:35:39.801370  495391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:35:39.801669  495391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:35:39.802108  495391 out.go:368] Setting JSON to false
	I1018 10:35:39.803097  495391 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8290,"bootTime":1760775450,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:35:39.803164  495391 start.go:141] virtualization:  
	I1018 10:35:39.806402  495391 out.go:179] * [newest-cni-577403] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:35:39.810328  495391 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:35:39.810427  495391 notify.go:220] Checking for updates...
	I1018 10:35:39.816271  495391 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:35:39.819256  495391 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:35:39.822114  495391 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:35:39.825029  495391 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:35:39.828055  495391 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:35:39.831505  495391 config.go:182] Loaded profile config "newest-cni-577403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:35:39.832111  495391 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:35:39.863036  495391 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:35:39.863166  495391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:35:39.927173  495391 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 10:35:39.917731447 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:35:39.927286  495391 docker.go:318] overlay module found
	I1018 10:35:39.930366  495391 out.go:179] * Using the docker driver based on existing profile
	I1018 10:35:39.933123  495391 start.go:305] selected driver: docker
	I1018 10:35:39.933144  495391 start.go:925] validating driver "docker" against &{Name:newest-cni-577403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:35:39.933390  495391 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:35:39.934106  495391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:35:40.005381  495391 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 10:35:39.995619077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:35:40.005732  495391 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 10:35:40.005761  495391 cni.go:84] Creating CNI manager for ""
	I1018 10:35:40.005822  495391 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:35:40.005906  495391 start.go:349] cluster config:
	{Name:newest-cni-577403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:35:40.026453  495391 out.go:179] * Starting "newest-cni-577403" primary control-plane node in "newest-cni-577403" cluster
	I1018 10:35:40.033077  495391 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:35:40.041033  495391 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:35:40.053031  495391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:35:40.053164  495391 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:35:40.053229  495391 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 10:35:40.053243  495391 cache.go:58] Caching tarball of preloaded images
	I1018 10:35:40.053329  495391 preload.go:233] Found /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 10:35:40.053342  495391 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 10:35:40.053461  495391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/config.json ...
	I1018 10:35:40.074618  495391 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:35:40.074639  495391 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 10:35:40.074659  495391 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:35:40.074684  495391 start.go:360] acquireMachinesLock for newest-cni-577403: {Name:mk1e4df99ad9f1535f8fd365f2c9b2df285e2ff8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:35:40.074753  495391 start.go:364] duration metric: took 49.289µs to acquireMachinesLock for "newest-cni-577403"
	I1018 10:35:40.074783  495391 start.go:96] Skipping create...Using existing machine configuration
	I1018 10:35:40.074789  495391 fix.go:54] fixHost starting: 
	I1018 10:35:40.075048  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:40.093765  495391 fix.go:112] recreateIfNeeded on newest-cni-577403: state=Stopped err=<nil>
	W1018 10:35:40.093803  495391 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 10:35:38.519228  487845 node_ready.go:57] node "no-preload-027087" has "Ready":"False" status (will retry)
	W1018 10:35:41.013770  487845 node_ready.go:57] node "no-preload-027087" has "Ready":"False" status (will retry)
	I1018 10:35:40.097109  495391 out.go:252] * Restarting existing docker container for "newest-cni-577403" ...
	I1018 10:35:40.097247  495391 cli_runner.go:164] Run: docker start newest-cni-577403
	I1018 10:35:40.367033  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:40.393168  495391 kic.go:430] container "newest-cni-577403" state is running.
	I1018 10:35:40.395377  495391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-577403
	I1018 10:35:40.416009  495391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/config.json ...
	I1018 10:35:40.416240  495391 machine.go:93] provisionDockerMachine start ...
	I1018 10:35:40.416320  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:40.437309  495391 main.go:141] libmachine: Using SSH client type: native
	I1018 10:35:40.437870  495391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1018 10:35:40.437889  495391 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:35:40.438479  495391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45776->127.0.0.1:33459: read: connection reset by peer
	I1018 10:35:43.596981  495391 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-577403
	
	I1018 10:35:43.597011  495391 ubuntu.go:182] provisioning hostname "newest-cni-577403"
	I1018 10:35:43.597080  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:43.615432  495391 main.go:141] libmachine: Using SSH client type: native
	I1018 10:35:43.615750  495391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1018 10:35:43.615768  495391 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-577403 && echo "newest-cni-577403" | sudo tee /etc/hostname
	I1018 10:35:43.783111  495391 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-577403
	
	I1018 10:35:43.783191  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:43.800707  495391 main.go:141] libmachine: Using SSH client type: native
	I1018 10:35:43.801011  495391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1018 10:35:43.801033  495391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-577403' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-577403/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-577403' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:35:43.955118  495391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 10:35:43.955193  495391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:35:43.955246  495391 ubuntu.go:190] setting up certificates
	I1018 10:35:43.955278  495391 provision.go:84] configureAuth start
	I1018 10:35:43.955363  495391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-577403
	I1018 10:35:43.972794  495391 provision.go:143] copyHostCerts
	I1018 10:35:43.972869  495391 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:35:43.972939  495391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:35:43.973068  495391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:35:43.973176  495391 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:35:43.973210  495391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:35:43.973244  495391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:35:43.973382  495391 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:35:43.973391  495391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:35:43.973423  495391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:35:43.973513  495391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.newest-cni-577403 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-577403]
	I1018 10:35:44.275227  495391 provision.go:177] copyRemoteCerts
	I1018 10:35:44.275294  495391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:35:44.275338  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:44.300095  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:44.405483  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:35:44.424129  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 10:35:44.442030  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:35:44.459902  495391 provision.go:87] duration metric: took 504.571348ms to configureAuth
	I1018 10:35:44.459934  495391 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:35:44.460170  495391 config.go:182] Loaded profile config "newest-cni-577403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:35:44.460313  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:44.477530  495391 main.go:141] libmachine: Using SSH client type: native
	I1018 10:35:44.477853  495391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1018 10:35:44.477879  495391 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:35:44.768677  495391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:35:44.768701  495391 machine.go:96] duration metric: took 4.352443773s to provisionDockerMachine
	I1018 10:35:44.768711  495391 start.go:293] postStartSetup for "newest-cni-577403" (driver="docker")
	I1018 10:35:44.768722  495391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:35:44.768802  495391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:35:44.768842  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:44.788260  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	W1018 10:35:43.516141  487845 node_ready.go:57] node "no-preload-027087" has "Ready":"False" status (will retry)
	I1018 10:35:45.517013  487845 node_ready.go:49] node "no-preload-027087" is "Ready"
	I1018 10:35:45.517040  487845 node_ready.go:38] duration metric: took 15.507208383s for node "no-preload-027087" to be "Ready" ...
	I1018 10:35:45.517053  487845 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:35:45.517113  487845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:35:45.536180  487845 api_server.go:72] duration metric: took 17.469094556s to wait for apiserver process to appear ...
	I1018 10:35:45.536208  487845 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:35:45.536229  487845 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 10:35:45.550548  487845 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 10:35:45.551795  487845 api_server.go:141] control plane version: v1.34.1
	I1018 10:35:45.551819  487845 api_server.go:131] duration metric: took 15.604409ms to wait for apiserver health ...
	I1018 10:35:45.551830  487845 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:35:45.555402  487845 system_pods.go:59] 8 kube-system pods found
	I1018 10:35:45.555437  487845 system_pods.go:61] "coredns-66bc5c9577-wt4wd" [ff570964-d787-4c47-a498-4ac05ed09b0a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:35:45.555444  487845 system_pods.go:61] "etcd-no-preload-027087" [df0b81be-5ccd-481d-88e8-0a351635eab5] Running
	I1018 10:35:45.555450  487845 system_pods.go:61] "kindnet-t9q5g" [4286ff28-6eca-4678-9d54-3a2dbe9bf8d1] Running
	I1018 10:35:45.555454  487845 system_pods.go:61] "kube-apiserver-no-preload-027087" [949b1bb0-6625-40d4-b2a4-75e49fd87133] Running
	I1018 10:35:45.555459  487845 system_pods.go:61] "kube-controller-manager-no-preload-027087" [1395022f-1ef0-43f8-b175-f5c5fdfdb777] Running
	I1018 10:35:45.555464  487845 system_pods.go:61] "kube-proxy-s87k4" [2e127631-8e09-43da-8d5a-7238894eedac] Running
	I1018 10:35:45.555473  487845 system_pods.go:61] "kube-scheduler-no-preload-027087" [dd112b07-cc98-4f21-8211-3ac896ec0be9] Running
	I1018 10:35:45.555480  487845 system_pods.go:61] "storage-provisioner" [b6343f75-ba5e-48f6-8eec-5343cabc28a4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:35:45.555489  487845 system_pods.go:74] duration metric: took 3.652918ms to wait for pod list to return data ...
	I1018 10:35:45.555502  487845 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:35:45.559334  487845 default_sa.go:45] found service account: "default"
	I1018 10:35:45.559355  487845 default_sa.go:55] duration metric: took 3.846538ms for default service account to be created ...
	I1018 10:35:45.559365  487845 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 10:35:45.562454  487845 system_pods.go:86] 8 kube-system pods found
	I1018 10:35:45.562490  487845 system_pods.go:89] "coredns-66bc5c9577-wt4wd" [ff570964-d787-4c47-a498-4ac05ed09b0a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:35:45.562497  487845 system_pods.go:89] "etcd-no-preload-027087" [df0b81be-5ccd-481d-88e8-0a351635eab5] Running
	I1018 10:35:45.562504  487845 system_pods.go:89] "kindnet-t9q5g" [4286ff28-6eca-4678-9d54-3a2dbe9bf8d1] Running
	I1018 10:35:45.562508  487845 system_pods.go:89] "kube-apiserver-no-preload-027087" [949b1bb0-6625-40d4-b2a4-75e49fd87133] Running
	I1018 10:35:45.562513  487845 system_pods.go:89] "kube-controller-manager-no-preload-027087" [1395022f-1ef0-43f8-b175-f5c5fdfdb777] Running
	I1018 10:35:45.562517  487845 system_pods.go:89] "kube-proxy-s87k4" [2e127631-8e09-43da-8d5a-7238894eedac] Running
	I1018 10:35:45.562522  487845 system_pods.go:89] "kube-scheduler-no-preload-027087" [dd112b07-cc98-4f21-8211-3ac896ec0be9] Running
	I1018 10:35:45.562529  487845 system_pods.go:89] "storage-provisioner" [b6343f75-ba5e-48f6-8eec-5343cabc28a4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:35:45.562547  487845 retry.go:31] will retry after 246.903091ms: missing components: kube-dns
	I1018 10:35:45.834464  487845 system_pods.go:86] 8 kube-system pods found
	I1018 10:35:45.834504  487845 system_pods.go:89] "coredns-66bc5c9577-wt4wd" [ff570964-d787-4c47-a498-4ac05ed09b0a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:35:45.834511  487845 system_pods.go:89] "etcd-no-preload-027087" [df0b81be-5ccd-481d-88e8-0a351635eab5] Running
	I1018 10:35:45.834517  487845 system_pods.go:89] "kindnet-t9q5g" [4286ff28-6eca-4678-9d54-3a2dbe9bf8d1] Running
	I1018 10:35:45.834522  487845 system_pods.go:89] "kube-apiserver-no-preload-027087" [949b1bb0-6625-40d4-b2a4-75e49fd87133] Running
	I1018 10:35:45.834526  487845 system_pods.go:89] "kube-controller-manager-no-preload-027087" [1395022f-1ef0-43f8-b175-f5c5fdfdb777] Running
	I1018 10:35:45.834530  487845 system_pods.go:89] "kube-proxy-s87k4" [2e127631-8e09-43da-8d5a-7238894eedac] Running
	I1018 10:35:45.834533  487845 system_pods.go:89] "kube-scheduler-no-preload-027087" [dd112b07-cc98-4f21-8211-3ac896ec0be9] Running
	I1018 10:35:45.834542  487845 system_pods.go:89] "storage-provisioner" [b6343f75-ba5e-48f6-8eec-5343cabc28a4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 10:35:45.834557  487845 retry.go:31] will retry after 243.620287ms: missing components: kube-dns
	I1018 10:35:46.084007  487845 system_pods.go:86] 8 kube-system pods found
	I1018 10:35:46.084098  487845 system_pods.go:89] "coredns-66bc5c9577-wt4wd" [ff570964-d787-4c47-a498-4ac05ed09b0a] Running
	I1018 10:35:46.084121  487845 system_pods.go:89] "etcd-no-preload-027087" [df0b81be-5ccd-481d-88e8-0a351635eab5] Running
	I1018 10:35:46.084143  487845 system_pods.go:89] "kindnet-t9q5g" [4286ff28-6eca-4678-9d54-3a2dbe9bf8d1] Running
	I1018 10:35:46.084164  487845 system_pods.go:89] "kube-apiserver-no-preload-027087" [949b1bb0-6625-40d4-b2a4-75e49fd87133] Running
	I1018 10:35:46.084185  487845 system_pods.go:89] "kube-controller-manager-no-preload-027087" [1395022f-1ef0-43f8-b175-f5c5fdfdb777] Running
	I1018 10:35:46.084204  487845 system_pods.go:89] "kube-proxy-s87k4" [2e127631-8e09-43da-8d5a-7238894eedac] Running
	I1018 10:35:46.084224  487845 system_pods.go:89] "kube-scheduler-no-preload-027087" [dd112b07-cc98-4f21-8211-3ac896ec0be9] Running
	I1018 10:35:46.084244  487845 system_pods.go:89] "storage-provisioner" [b6343f75-ba5e-48f6-8eec-5343cabc28a4] Running
	I1018 10:35:46.084268  487845 system_pods.go:126] duration metric: took 524.896847ms to wait for k8s-apps to be running ...
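
The "will retry after …: missing components: kube-dns" lines above come from a poll with a short randomized delay. A sketch of that retry shape (the delay and jitter values here are assumptions, not retry.go's exact policy):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryUntil re-runs f with a small randomized delay until it succeeds
    // or the deadline passes, logging each retry like the lines above.
    func retryUntil(deadline time.Time, f func() error) error {
        for {
            err := f()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return err
            }
            d := 200*time.Millisecond + time.Duration(rand.Intn(100))*time.Millisecond
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
    }

    func main() {
        attempts := 0
        err := retryUntil(time.Now().Add(10*time.Second), func() error {
            attempts++
            if attempts < 3 {
                return errors.New("missing components: kube-dns")
            }
            return nil
        })
        fmt.Println("done:", err)
    }
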
	I1018 10:35:46.084289  487845 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 10:35:46.084365  487845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:35:46.100826  487845 system_svc.go:56] duration metric: took 16.526053ms WaitForService to wait for kubelet
	I1018 10:35:46.100852  487845 kubeadm.go:586] duration metric: took 18.033771789s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:35:46.100870  487845 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:35:46.104372  487845 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:35:46.104401  487845 node_conditions.go:123] node cpu capacity is 2
	I1018 10:35:46.104413  487845 node_conditions.go:105] duration metric: took 3.538407ms to run NodePressure ...
	I1018 10:35:46.104426  487845 start.go:241] waiting for startup goroutines ...
	I1018 10:35:46.104434  487845 start.go:246] waiting for cluster config update ...
	I1018 10:35:46.104446  487845 start.go:255] writing updated cluster config ...
	I1018 10:35:46.104763  487845 ssh_runner.go:195] Run: rm -f paused
	I1018 10:35:46.108852  487845 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:35:46.112794  487845 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wt4wd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.118588  487845 pod_ready.go:94] pod "coredns-66bc5c9577-wt4wd" is "Ready"
	I1018 10:35:46.118722  487845 pod_ready.go:86] duration metric: took 5.859296ms for pod "coredns-66bc5c9577-wt4wd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.121348  487845 pod_ready.go:83] waiting for pod "etcd-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.130565  487845 pod_ready.go:94] pod "etcd-no-preload-027087" is "Ready"
	I1018 10:35:46.130641  487845 pod_ready.go:86] duration metric: took 9.222341ms for pod "etcd-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.134263  487845 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.144400  487845 pod_ready.go:94] pod "kube-apiserver-no-preload-027087" is "Ready"
	I1018 10:35:46.144471  487845 pod_ready.go:86] duration metric: took 10.141532ms for pod "kube-apiserver-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.147094  487845 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:46.519901  487845 pod_ready.go:94] pod "kube-controller-manager-no-preload-027087" is "Ready"
	I1018 10:35:46.519934  487845 pod_ready.go:86] duration metric: took 372.764875ms for pod "kube-controller-manager-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:44.897372  495391 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:35:44.901120  495391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:35:44.901148  495391 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:35:44.901159  495391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:35:44.901243  495391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:35:44.901323  495391 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:35:44.901441  495391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:35:44.909035  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:35:44.930516  495391 start.go:296] duration metric: took 161.788667ms for postStartSetup
	I1018 10:35:44.930615  495391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:35:44.930669  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:44.948531  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:45.062980  495391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:35:45.075232  495391 fix.go:56] duration metric: took 5.000434531s for fixHost
	I1018 10:35:45.075257  495391 start.go:83] releasing machines lock for "newest-cni-577403", held for 5.000496094s
	I1018 10:35:45.075345  495391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-577403
	I1018 10:35:45.122222  495391 ssh_runner.go:195] Run: cat /version.json
	I1018 10:35:45.122300  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:45.133589  495391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:35:45.133670  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:45.178589  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:45.193667  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:45.377497  495391 ssh_runner.go:195] Run: systemctl --version
	I1018 10:35:45.492016  495391 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:35:45.558708  495391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:35:45.565479  495391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:35:45.565553  495391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:35:45.579827  495391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
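
The find/mv step above renames any bridge or podman CNI configs with a ".mk_disabled" suffix so the runtime stops loading them (on this host there were none). A rough Go equivalent of that rename pass; running it for real needs root:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Patterns mirror the find expression in the log.
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, _ := filepath.Glob(pat)
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    fmt.Fprintln(os.Stderr, err)
                    continue
                }
                fmt.Println("disabled", m)
            }
        }
    }
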
	I1018 10:35:45.579851  495391 start.go:495] detecting cgroup driver to use...
	I1018 10:35:45.579883  495391 detect.go:187] detected "cgroupfs" cgroup driver on host os
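
detect.go reports "cgroupfs" on this host. One common heuristic for that decision, offered here as an assumption rather than minikube's exact logic, is to look for the cgroup v2 unified hierarchy:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // cgroup v2 exposes a unified controllers file; on cgroup v1 hosts
        // (like this one) it is absent and cgroupfs is the safe default.
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("cgroup v2 detected; systemd driver likely")
        } else {
            fmt.Println("cgroup v1 detected; using cgroupfs driver")
        }
    }
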
	I1018 10:35:45.579941  495391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:35:45.599962  495391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:35:45.615542  495391 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:35:45.615607  495391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:35:45.631974  495391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:35:45.645768  495391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:35:45.850945  495391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:35:46.040544  495391 docker.go:234] disabling docker service ...
	I1018 10:35:46.040672  495391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:35:46.058366  495391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:35:46.072599  495391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:35:46.219042  495391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:35:46.339609  495391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:35:46.352898  495391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:35:46.367774  495391 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:35:46.367924  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.376873  495391 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:35:46.376976  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.385883  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.394716  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.410171  495391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:35:46.418435  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.427933  495391 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.436348  495391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:35:46.445365  495391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:35:46.453322  495391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:35:46.460779  495391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:35:46.597365  495391 ssh_runner.go:195] Run: sudo systemctl restart crio
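
The sed edits above pin the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before crio restarts. The pause_image edit, roughly, as a Go sketch (path and image are taken from the log; writing the file needs root):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        // Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
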
	I1018 10:35:46.749245  495391 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:35:46.749368  495391 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:35:46.753646  495391 start.go:563] Will wait 60s for crictl version
	I1018 10:35:46.753713  495391 ssh_runner.go:195] Run: which crictl
	I1018 10:35:46.757613  495391 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:35:46.783482  495391 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:35:46.783572  495391 ssh_runner.go:195] Run: crio --version
	I1018 10:35:46.814334  495391 ssh_runner.go:195] Run: crio --version
	I1018 10:35:46.847943  495391 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:35:46.850766  495391 cli_runner.go:164] Run: docker network inspect newest-cni-577403 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:35:46.867061  495391 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 10:35:46.870908  495391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
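
The bash one-liner above strips any stale host.minikube.internal entry and appends the current mapping, so the rewrite is idempotent. The same upsert sketched in Go (the tab separator matches the grep pattern in the log; editing /etc/hosts needs root):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost rewrites an /etc/hosts-style file so exactly one line maps host to ip.
    func upsertHost(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
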
	I1018 10:35:46.885297  495391 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 10:35:46.714713  487845 pod_ready.go:83] waiting for pod "kube-proxy-s87k4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:47.112811  487845 pod_ready.go:94] pod "kube-proxy-s87k4" is "Ready"
	I1018 10:35:47.112845  487845 pod_ready.go:86] duration metric: took 398.049543ms for pod "kube-proxy-s87k4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:47.313325  487845 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:47.713727  487845 pod_ready.go:94] pod "kube-scheduler-no-preload-027087" is "Ready"
	I1018 10:35:47.713752  487845 pod_ready.go:86] duration metric: took 400.404874ms for pod "kube-scheduler-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:35:47.713763  487845 pod_ready.go:40] duration metric: took 1.604831389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
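
The "extra waiting" pass above checks, label by label, that every matching kube-system pod is Ready (or gone). One such check sketched with client-go; the kubeconfig path is assumed:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podsReady reports whether every kube-system pod matching selector is Ready.
    func podsReady(ctx context.Context, cs *kubernetes.Clientset, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "k8s-app=kube-proxy"} {
            ok, err := podsReady(context.Background(), cs, sel)
            fmt.Println(sel, ok, err)
        }
    }
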
	I1018 10:35:47.799613  487845 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:35:47.806148  487845 out.go:179] * Done! kubectl is now configured to use "no-preload-027087" cluster and "default" namespace by default
	I1018 10:35:46.888166  495391 kubeadm.go:883] updating cluster {Name:newest-cni-577403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:35:46.888290  495391 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:35:46.888365  495391 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:35:46.930436  495391 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:35:46.930456  495391 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:35:46.930517  495391 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:35:46.959600  495391 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:35:46.959678  495391 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:35:46.959700  495391 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 10:35:46.959834  495391 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-577403 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 10:35:46.959960  495391 ssh_runner.go:195] Run: crio config
	I1018 10:35:47.017741  495391 cni.go:84] Creating CNI manager for ""
	I1018 10:35:47.017759  495391 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:35:47.017777  495391 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 10:35:47.017803  495391 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-577403 NodeName:newest-cni-577403 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:35:47.017948  495391 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-577403"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
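The generated file above is four YAML documents in one stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch that walks such a stream and prints each document's kind, using gopkg.in/yaml.v3 (the library choice is an assumption; the path is from the log):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()
        dec := yaml.NewDecoder(f) // yields one document per Decode call
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                fmt.Fprintln(os.Stderr, err)
                return
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }
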
	I1018 10:35:47.018023  495391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:35:47.027031  495391 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:35:47.027117  495391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:35:47.035837  495391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 10:35:47.049534  495391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:35:47.063007  495391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1018 10:35:47.076433  495391 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:35:47.080306  495391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:35:47.090375  495391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:35:47.204325  495391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:35:47.225369  495391 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403 for IP: 192.168.85.2
	I1018 10:35:47.225436  495391 certs.go:195] generating shared ca certs ...
	I1018 10:35:47.225467  495391 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:47.225631  495391 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:35:47.225720  495391 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:35:47.225752  495391 certs.go:257] generating profile certs ...
	I1018 10:35:47.225860  495391 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/client.key
	I1018 10:35:47.225960  495391 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.key.da20550e
	I1018 10:35:47.226032  495391 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/proxy-client.key
	I1018 10:35:47.226191  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:35:47.226258  495391 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:35:47.226290  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:35:47.226337  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:35:47.226389  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:35:47.226432  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:35:47.226504  495391 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:35:47.227115  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:35:47.249725  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:35:47.270527  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:35:47.292026  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:35:47.315038  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 10:35:47.335134  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:35:47.354499  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:35:47.381269  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/newest-cni-577403/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 10:35:47.402919  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:35:47.434336  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:35:47.453622  495391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:35:47.473935  495391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:35:47.487419  495391 ssh_runner.go:195] Run: openssl version
	I1018 10:35:47.500286  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:35:47.517526  495391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:35:47.522408  495391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:35:47.522527  495391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:35:47.568019  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:35:47.577990  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:35:47.586709  495391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:35:47.590959  495391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:35:47.591035  495391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:35:47.632731  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:35:47.641420  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:35:47.650459  495391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:35:47.654271  495391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:35:47.654339  495391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:35:47.696977  495391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 10:35:47.705648  495391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:35:47.709701  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 10:35:47.758867  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 10:35:47.806936  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 10:35:47.933964  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 10:35:48.116339  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 10:35:48.246261  495391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
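
The openssl x509 -checkend 86400 calls above ask whether each certificate is still valid 24 hours from now; a failing check would trigger regeneration. The same test sketched with Go's crypto/x509:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkend mimics `openssl x509 -checkend <seconds>`: true if the cert
    // is still valid at now+d.
    func checkend(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }
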
	I1018 10:35:48.327624  495391 kubeadm.go:400] StartCluster: {Name:newest-cni-577403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-577403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:35:48.327723  495391 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:35:48.327796  495391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:35:48.380000  495391 cri.go:89] found id: "4931d62ebebc151d36b33ceac56370520ce022b159f398ba2c6d4d5335fe5cd5"
	I1018 10:35:48.380023  495391 cri.go:89] found id: "8598a86fdd0b5578be0124e533f2578cdbca59b60d2e2c51ec223a9bceea0ced"
	I1018 10:35:48.380029  495391 cri.go:89] found id: "b81cb4d2f278c266341a3cd9b07f6427e26118aa0c261292dea5cf46666371e8"
	I1018 10:35:48.380033  495391 cri.go:89] found id: "f87e3ee83d2a07038569e0e133062e319fd5545af2a5f970168374c1227e8428"
	I1018 10:35:48.380036  495391 cri.go:89] found id: ""
	I1018 10:35:48.380090  495391 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 10:35:48.396198  495391 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:35:48Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:35:48.396286  495391 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:35:48.426352  495391 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 10:35:48.426372  495391 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 10:35:48.426444  495391 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 10:35:48.442293  495391 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 10:35:48.442875  495391 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-577403" does not appear in /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:35:48.443165  495391 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-293333/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-577403" cluster setting kubeconfig missing "newest-cni-577403" context setting]
	I1018 10:35:48.443641  495391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:48.446016  495391 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 10:35:48.467094  495391 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 10:35:48.467128  495391 kubeadm.go:601] duration metric: took 40.749819ms to restartPrimaryControlPlane
	I1018 10:35:48.467138  495391 kubeadm.go:402] duration metric: took 139.524326ms to StartCluster
	I1018 10:35:48.467152  495391 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:48.467216  495391 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:35:48.468223  495391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:35:48.468465  495391 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:35:48.468802  495391 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:35:48.468876  495391 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-577403"
	I1018 10:35:48.468892  495391 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-577403"
	W1018 10:35:48.468903  495391 addons.go:247] addon storage-provisioner should already be in state true
	I1018 10:35:48.468923  495391 host.go:66] Checking if "newest-cni-577403" exists ...
	I1018 10:35:48.469552  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:48.469959  495391 config.go:182] Loaded profile config "newest-cni-577403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:35:48.470017  495391 addons.go:69] Setting default-storageclass=true in profile "newest-cni-577403"
	I1018 10:35:48.470036  495391 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-577403"
	I1018 10:35:48.470298  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:48.472740  495391 addons.go:69] Setting dashboard=true in profile "newest-cni-577403"
	I1018 10:35:48.472770  495391 addons.go:238] Setting addon dashboard=true in "newest-cni-577403"
	W1018 10:35:48.472778  495391 addons.go:247] addon dashboard should already be in state true
	I1018 10:35:48.472812  495391 host.go:66] Checking if "newest-cni-577403" exists ...
	I1018 10:35:48.473342  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:48.473871  495391 out.go:179] * Verifying Kubernetes components...
	I1018 10:35:48.480751  495391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:35:48.536239  495391 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 10:35:48.536368  495391 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:35:48.539243  495391 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:35:48.539278  495391 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:35:48.539248  495391 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 10:35:48.539348  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:48.541507  495391 addons.go:238] Setting addon default-storageclass=true in "newest-cni-577403"
	W1018 10:35:48.541533  495391 addons.go:247] addon default-storageclass should already be in state true
	I1018 10:35:48.541557  495391 host.go:66] Checking if "newest-cni-577403" exists ...
	I1018 10:35:48.541970  495391 cli_runner.go:164] Run: docker container inspect newest-cni-577403 --format={{.State.Status}}
	I1018 10:35:48.542272  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 10:35:48.542291  495391 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 10:35:48.542346  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:48.586169  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:48.606874  495391 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:35:48.606900  495391 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:35:48.606963  495391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-577403
	I1018 10:35:48.616866  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:48.642774  495391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/newest-cni-577403/id_rsa Username:docker}
	I1018 10:35:48.785117  495391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:35:48.820842  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 10:35:48.820864  495391 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 10:35:48.845950  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 10:35:48.845972  495391 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 10:35:48.869686  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 10:35:48.869707  495391 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 10:35:48.899843  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 10:35:48.899925  495391 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 10:35:48.922442  495391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:35:48.951187  495391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:35:48.992259  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 10:35:48.992336  495391 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 10:35:49.090634  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 10:35:49.090658  495391 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 10:35:49.154528  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 10:35:49.154552  495391 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 10:35:49.196258  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 10:35:49.196290  495391 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 10:35:49.219095  495391 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 10:35:49.219135  495391 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 10:35:49.245405  495391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 10:35:54.520126  495391 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.734915278s)
	I1018 10:35:54.520198  495391 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.597676781s)
	I1018 10:35:54.520518  495391 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.569265757s)
	I1018 10:35:54.520558  495391 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:35:54.520617  495391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:35:54.520760  495391 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.2752767s)
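
The addon applies above run the cluster's own kubectl binary against the in-VM kubeconfig, passing one -f per manifest. Roughly, with os/exec (binary and kubeconfig paths are from the log; the manifest list is shortened for illustration):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        args := []string{"apply",
            "-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
            "-f", "/etc/kubernetes/addons/dashboard-svc.yaml",
        }
        cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
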
	I1018 10:35:54.523991  495391 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-577403 addons enable metrics-server
	
	I1018 10:35:54.544161  495391 api_server.go:72] duration metric: took 6.07565302s to wait for apiserver process to appear ...
	I1018 10:35:54.544183  495391 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:35:54.544201  495391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 10:35:54.557901  495391 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 10:35:54.557975  495391 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 10:35:54.567339  495391 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 10:35:54.570380  495391 addons.go:514] duration metric: took 6.101573061s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 10:35:55.045029  495391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 10:35:55.053858  495391 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 10:35:55.055336  495391 api_server.go:141] control plane version: v1.34.1
	I1018 10:35:55.055366  495391 api_server.go:131] duration metric: took 511.176614ms to wait for apiserver health ...
	I1018 10:35:55.055377  495391 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:35:55.060646  495391 system_pods.go:59] 8 kube-system pods found
	I1018 10:35:55.060696  495391 system_pods.go:61] "coredns-66bc5c9577-g5hjd" [d8506151-9057-4d64-9951-94bfc8e48157] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 10:35:55.060706  495391 system_pods.go:61] "etcd-newest-cni-577403" [9061973a-4cc4-4701-ac68-b463a5c36efe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:35:55.060711  495391 system_pods.go:61] "kindnet-dc6mn" [59b45574-ece2-4376-aacf-8e87cb8f03e7] Running
	I1018 10:35:55.060719  495391 system_pods.go:61] "kube-apiserver-newest-cni-577403" [bfab2b0b-ff85-4eb8-8e64-157577d51881] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:35:55.060730  495391 system_pods.go:61] "kube-controller-manager-newest-cni-577403" [0ffcd3ef-9adb-437c-9c04-32638238a83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:35:55.060738  495391 system_pods.go:61] "kube-proxy-4twn2" [060f019f-35b3-47a0-af70-f480829d1715] Running
	I1018 10:35:55.060744  495391 system_pods.go:61] "kube-scheduler-newest-cni-577403" [6c0fc2df-7ebe-4634-828f-7febca31dffc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:35:55.060767  495391 system_pods.go:61] "storage-provisioner" [2f727e8b-afd6-4e3e-96f3-a9d649d239ff] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 10:35:55.060774  495391 system_pods.go:74] duration metric: took 5.391812ms to wait for pod list to return data ...
	I1018 10:35:55.060789  495391 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:35:55.063384  495391 default_sa.go:45] found service account: "default"
	I1018 10:35:55.063414  495391 default_sa.go:55] duration metric: took 2.617329ms for default service account to be created ...
	I1018 10:35:55.063436  495391 kubeadm.go:586] duration metric: took 6.594929328s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 10:35:55.063456  495391 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:35:55.068366  495391 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:35:55.068419  495391 node_conditions.go:123] node cpu capacity is 2
	I1018 10:35:55.068499  495391 node_conditions.go:105] duration metric: took 4.971713ms to run NodePressure ...
	I1018 10:35:55.068518  495391 start.go:241] waiting for startup goroutines ...
	I1018 10:35:55.068526  495391 start.go:246] waiting for cluster config update ...
	I1018 10:35:55.068541  495391 start.go:255] writing updated cluster config ...
	I1018 10:35:55.068886  495391 ssh_runner.go:195] Run: rm -f paused
	I1018 10:35:55.161466  495391 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:35:55.164855  495391 out.go:179] * Done! kubectl is now configured to use "newest-cni-577403" cluster and "default" namespace by default
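
[Editor's note] The `[-]poststarthook/rbac/bootstrap-roles failed: reason withheld` block earlier in this log is the apiserver's verbose /healthz output, and the 200 response at 10:35:55 shows the same endpoint recovering once bootstrap completes. A minimal sketch of that probe in Go, reusing the endpoint address from the log; skipping TLS verification here is an assumption standing in for loading the profile's CA bundle:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption: InsecureSkipVerify stands in for loading the cluster CA
	// from the minikube profile directory.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// ?verbose makes the apiserver list each poststarthook as [+] ok or
	// [-] failed, matching the block quoted above.
	resp, err := client.Get("https://192.168.85.2:8443/healthz?verbose")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d\n%s", resp.StatusCode, body)
}
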
	
	
	==> CRI-O <==
	Oct 18 10:35:45 no-preload-027087 crio[840]: time="2025-10-18T10:35:45.785964803Z" level=info msg="Starting container: 38f4139c05d6c17fdb38b6c0669c661d8ade1c2b4362fc44864b0eb91d48045d" id=6946cc97-4268-403a-9127-4163da38f416 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:35:45 no-preload-027087 crio[840]: time="2025-10-18T10:35:45.788946148Z" level=info msg="Started container" PID=2502 containerID=38f4139c05d6c17fdb38b6c0669c661d8ade1c2b4362fc44864b0eb91d48045d description=kube-system/storage-provisioner/storage-provisioner id=6946cc97-4268-403a-9127-4163da38f416 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c3d13fde85ebccc075cb76fefca322d90a973283a10f0de1b49d4a2f7693253
	Oct 18 10:35:45 no-preload-027087 crio[840]: time="2025-10-18T10:35:45.795240543Z" level=info msg="Started container" PID=2503 containerID=88fa0541d1a09cb6489a0c052dffe5d301e26d78f64d55bb262a6b9d2831345e description=kube-system/coredns-66bc5c9577-wt4wd/coredns id=65a508cb-55fd-4164-b8b1-e5203ec72065 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a5c6e23f476db2457febe74ef3c34be10524084b28f127c6c573620a55894cd
	Oct 18 10:35:48 no-preload-027087 crio[840]: time="2025-10-18T10:35:48.456987712Z" level=info msg="Running pod sandbox: default/busybox/POD" id=caa4a171-bc7b-477f-8c32-0a25f764e8b9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:35:48 no-preload-027087 crio[840]: time="2025-10-18T10:35:48.457061772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:48 no-preload-027087 crio[840]: time="2025-10-18T10:35:48.477883329Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:32946fd2ba6c063f9c162531c37d82f6d7ebd9e2aa0bfd942959c75817f8cb5c UID:df688ec3-f32c-4bdb-8846-fe0eeaff3436 NetNS:/var/run/netns/a4ac78d0-780e-41b2-995d-c7b95107f183 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b450}] Aliases:map[]}"
	Oct 18 10:35:48 no-preload-027087 crio[840]: time="2025-10-18T10:35:48.478059224Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 10:35:48 no-preload-027087 crio[840]: time="2025-10-18T10:35:48.517414753Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:32946fd2ba6c063f9c162531c37d82f6d7ebd9e2aa0bfd942959c75817f8cb5c UID:df688ec3-f32c-4bdb-8846-fe0eeaff3436 NetNS:/var/run/netns/a4ac78d0-780e-41b2-995d-c7b95107f183 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b450}] Aliases:map[]}"
	Oct 18 10:35:48 no-preload-027087 crio[840]: time="2025-10-18T10:35:48.517568593Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 10:35:48 no-preload-027087 crio[840]: time="2025-10-18T10:35:48.52873623Z" level=info msg="Ran pod sandbox 32946fd2ba6c063f9c162531c37d82f6d7ebd9e2aa0bfd942959c75817f8cb5c with infra container: default/busybox/POD" id=caa4a171-bc7b-477f-8c32-0a25f764e8b9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 10:35:48 no-preload-027087 crio[840]: time="2025-10-18T10:35:48.551070223Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ecef98d7-2ca4-40a0-a69e-5fd848768392 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:35:48 no-preload-027087 crio[840]: time="2025-10-18T10:35:48.551203419Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ecef98d7-2ca4-40a0-a69e-5fd848768392 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:35:48 no-preload-027087 crio[840]: time="2025-10-18T10:35:48.551241426Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ecef98d7-2ca4-40a0-a69e-5fd848768392 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:35:48 no-preload-027087 crio[840]: time="2025-10-18T10:35:48.557584511Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bce5c48d-7dcd-4a7c-a608-e765b648c57f name=/runtime.v1.ImageService/PullImage
	Oct 18 10:35:48 no-preload-027087 crio[840]: time="2025-10-18T10:35:48.559522069Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 10:35:50 no-preload-027087 crio[840]: time="2025-10-18T10:35:50.677320292Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=bce5c48d-7dcd-4a7c-a608-e765b648c57f name=/runtime.v1.ImageService/PullImage
	Oct 18 10:35:50 no-preload-027087 crio[840]: time="2025-10-18T10:35:50.678280689Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6181de96-2d16-4669-8eb3-d3102a934362 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:35:50 no-preload-027087 crio[840]: time="2025-10-18T10:35:50.679991603Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=654ac1cc-32f1-4c96-a0d6-0e245921e8b4 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:35:50 no-preload-027087 crio[840]: time="2025-10-18T10:35:50.686550551Z" level=info msg="Creating container: default/busybox/busybox" id=c46293bf-f2b6-4472-8a24-93e51f57286f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:35:50 no-preload-027087 crio[840]: time="2025-10-18T10:35:50.687555388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:50 no-preload-027087 crio[840]: time="2025-10-18T10:35:50.692430402Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:50 no-preload-027087 crio[840]: time="2025-10-18T10:35:50.693053811Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:35:50 no-preload-027087 crio[840]: time="2025-10-18T10:35:50.715024141Z" level=info msg="Created container c13c5a5251059e853d463782bc4cde2c33caaaaaae2d0d97a50b9d77ffc9f1bf: default/busybox/busybox" id=c46293bf-f2b6-4472-8a24-93e51f57286f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:35:50 no-preload-027087 crio[840]: time="2025-10-18T10:35:50.718329454Z" level=info msg="Starting container: c13c5a5251059e853d463782bc4cde2c33caaaaaae2d0d97a50b9d77ffc9f1bf" id=0551d714-fddc-4f19-98b8-307e4abe4364 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:35:50 no-preload-027087 crio[840]: time="2025-10-18T10:35:50.725824677Z" level=info msg="Started container" PID=2562 containerID=c13c5a5251059e853d463782bc4cde2c33caaaaaae2d0d97a50b9d77ffc9f1bf description=default/busybox/busybox id=0551d714-fddc-4f19-98b8-307e4abe4364 name=/runtime.v1.RuntimeService/StartContainer sandboxID=32946fd2ba6c063f9c162531c37d82f6d7ebd9e2aa0bfd942959c75817f8cb5c
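
[Editor's note] The ImageStatus, PullImage, CreateContainer, StartContainer sequence above is the standard CRI flow when an image is used for the first time. A rough sketch of the first two steps against the same gRPC API; the socket path is the conventional CRI-O default and an assumption here:

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	ref := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}

	// Step 1: ImageStatus. A nil Image in the response means "not found",
	// which is what triggers the PullImage call in the log above.
	st, err := img.ImageStatus(context.Background(), &runtimeapi.ImageStatusRequest{Image: ref})
	if err != nil {
		log.Fatal(err)
	}
	if st.Image == nil {
		// Step 2: PullImage, mirroring the "Pulling image: ..." line.
		if _, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{Image: ref}); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Println("image present")
}
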
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c13c5a5251059       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   32946fd2ba6c0       busybox                                     default
	88fa0541d1a09       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   4a5c6e23f476d       coredns-66bc5c9577-wt4wd                    kube-system
	38f4139c05d6c       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   7c3d13fde85eb       storage-provisioner                         kube-system
	eb1c43d5f851a       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   ababb063da5c3       kindnet-t9q5g                               kube-system
	6034cf8c27c7b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      27 seconds ago      Running             kube-proxy                0                   b08d42d1585eb       kube-proxy-s87k4                            kube-system
	9c04e5967f289       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      44 seconds ago      Running             kube-apiserver            0                   2c2d44eeea19f       kube-apiserver-no-preload-027087            kube-system
	123413008ee39       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      44 seconds ago      Running             etcd                      0                   9a04f6bb61a49       etcd-no-preload-027087                      kube-system
	5e2c8f7d87098       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      44 seconds ago      Running             kube-controller-manager   0                   d797aa00c95e8       kube-controller-manager-no-preload-027087   kube-system
	1b9942bda6eb8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      44 seconds ago      Running             kube-scheduler            0                   602cf8ac449af       kube-scheduler-no-preload-027087            kube-system
	
	
	==> coredns [88fa0541d1a09cb6489a0c052dffe5d301e26d78f64d55bb262a6b9d2831345e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36281 - 2727 "HINFO IN 8833190434456535013.5945174401268464897. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019299724s
	
	
	==> describe nodes <==
	Name:               no-preload-027087
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-027087
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=no-preload-027087
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_35_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:35:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-027087
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:35:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:35:54 +0000   Sat, 18 Oct 2025 10:35:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:35:54 +0000   Sat, 18 Oct 2025 10:35:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:35:54 +0000   Sat, 18 Oct 2025 10:35:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 10:35:54 +0000   Sat, 18 Oct 2025 10:35:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-027087
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                bcb80226-a3a4-43ba-81ed-aa5457f89057
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-wt4wd                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-no-preload-027087                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-t9q5g                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-027087             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-027087    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-s87k4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-027087             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Warning  CgroupV1                 46s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node no-preload-027087 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node no-preload-027087 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node no-preload-027087 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-027087 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-027087 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-027087 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-027087 event: Registered Node no-preload-027087 in Controller
	  Normal   NodeReady                13s                kubelet          Node no-preload-027087 status is now: NodeReady
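
[Editor's note] The percentages in the Allocated resources table above are the summed pod requests divided by node allocatable (the node reports cpu: 2, i.e. 2000m). A quick check of the cpu row:

package main

import "fmt"

func main() {
	allocatableMilli := int64(2000) // Allocatable cpu: 2
	// Requests from the pod table: coredns 100m, etcd 100m, kindnet 100m,
	// kube-apiserver 250m, kube-controller-manager 200m, kube-scheduler 100m.
	requestsMilli := int64(100 + 100 + 100 + 250 + 200 + 100)
	fmt.Printf("cpu %dm (%d%%)\n", requestsMilli, requestsMilli*100/allocatableMilli)
	// Output: cpu 850m (42%)
}
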
	
	
	==> dmesg <==
	[Oct18 10:16] overlayfs: idmapped layers are currently not supported
	[  +1.944912] overlayfs: idmapped layers are currently not supported
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	[ +55.677340] overlayfs: idmapped layers are currently not supported
	[  +3.870584] overlayfs: idmapped layers are currently not supported
	[Oct18 10:24] overlayfs: idmapped layers are currently not supported
	[ +31.226998] overlayfs: idmapped layers are currently not supported
	[Oct18 10:27] overlayfs: idmapped layers are currently not supported
	[ +41.576921] overlayfs: idmapped layers are currently not supported
	[  +5.117406] overlayfs: idmapped layers are currently not supported
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	[Oct18 10:31] overlayfs: idmapped layers are currently not supported
	[  +3.453230] overlayfs: idmapped layers are currently not supported
	[Oct18 10:33] overlayfs: idmapped layers are currently not supported
	[  +6.524055] overlayfs: idmapped layers are currently not supported
	[Oct18 10:34] overlayfs: idmapped layers are currently not supported
	[Oct18 10:35] overlayfs: idmapped layers are currently not supported
	[ +27.675349] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [123413008ee39b0eaf55f2292394593c906c1e3e8311d0491f034cd39dfec875] <==
	{"level":"warn","ts":"2025-10-18T10:35:17.613213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:17.642576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:17.677495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:17.709808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:17.737892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:17.790758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:17.829403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:17.868390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:17.911463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:17.928066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:17.952524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:17.981885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:18.004304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:18.049424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:18.152970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:18.217574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:18.257248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:18.317656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:18.353565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:18.398546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:18.494747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:18.520918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:18.595424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:18.618770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:35:18.729438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55858","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:35:58 up  2:18,  0 user,  load average: 4.18, 4.27, 3.38
	Linux no-preload-027087 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eb1c43d5f851a8437bdd51e2386233855449cd6b69a8aa00aad6f5fbf8bd42dd] <==
	I1018 10:35:34.623120       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:35:34.623501       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 10:35:34.623656       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:35:34.623697       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:35:34.623736       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:35:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:35:34.914044       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:35:34.914183       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:35:34.914219       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:35:34.914996       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 10:35:35.115199       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 10:35:35.115339       1 metrics.go:72] Registering metrics
	I1018 10:35:35.115492       1 controller.go:711] "Syncing nftables rules"
	I1018 10:35:44.914458       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:35:44.914578       1 main.go:301] handling current node
	I1018 10:35:54.914981       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:35:54.915021       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9c04e5967f28962d469a7538b094ac4956fd0c91537ce93f30c78aad04b30b9e] <==
	I1018 10:35:20.154429       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 10:35:20.190954       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:35:20.191580       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 10:35:20.246515       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:35:20.246705       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 10:35:20.250950       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 10:35:20.347122       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 10:35:20.752465       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 10:35:20.768899       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 10:35:20.769675       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 10:35:22.004871       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 10:35:22.085524       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 10:35:22.259507       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 10:35:22.271975       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 10:35:22.273175       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 10:35:22.278772       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 10:35:22.784938       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 10:35:23.458267       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 10:35:23.495361       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 10:35:23.510501       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 10:35:28.670825       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:35:28.678687       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 10:35:28.712283       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:35:28.980318       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1018 10:35:56.347221       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:55378: use of closed network connection
	
	
	==> kube-controller-manager [5e2c8f7d870989a9cafdfce890c12dcd4222f5b811be021c970627b46e595535] <==
	I1018 10:35:27.807310       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 10:35:27.809634       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 10:35:27.817330       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:35:27.819689       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 10:35:27.821781       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 10:35:27.822922       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 10:35:27.823923       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-027087" podCIDRs=["10.244.0.0/24"]
	I1018 10:35:27.824039       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 10:35:27.824850       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 10:35:27.825054       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 10:35:27.825516       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 10:35:27.825550       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 10:35:27.825565       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 10:35:27.826644       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 10:35:27.830378       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 10:35:27.830790       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 10:35:27.833228       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 10:35:27.833328       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 10:35:27.833418       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-027087"
	I1018 10:35:27.833472       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 10:35:27.833528       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 10:35:27.839432       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:35:27.839469       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 10:35:27.839480       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 10:35:47.841417       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6034cf8c27c7b0bd5ef99f681fc8b5e1c0587b033d0d724aee4a0ecab2bc326a] <==
	I1018 10:35:30.824989       1 server_linux.go:53] "Using iptables proxy"
	I1018 10:35:30.918790       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 10:35:31.019060       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 10:35:31.019171       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 10:35:31.019302       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 10:35:31.055088       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:35:31.055220       1 server_linux.go:132] "Using iptables Proxier"
	I1018 10:35:31.060808       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 10:35:31.061168       1 server.go:527] "Version info" version="v1.34.1"
	I1018 10:35:31.061240       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:35:31.062601       1 config.go:200] "Starting service config controller"
	I1018 10:35:31.062623       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 10:35:31.062643       1 config.go:106] "Starting endpoint slice config controller"
	I1018 10:35:31.062648       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 10:35:31.062659       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 10:35:31.062665       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 10:35:31.063394       1 config.go:309] "Starting node config controller"
	I1018 10:35:31.063415       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 10:35:31.063422       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 10:35:31.163013       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 10:35:31.163024       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 10:35:31.163064       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1b9942bda6eb8fada7dd8a4bdbab298b6c66ae57a747dc03b866a5d14c616e61] <==
	I1018 10:35:21.215443       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:35:21.228159       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 10:35:21.232759       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 10:35:21.263845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1018 10:35:21.232808       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:35:21.271126       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1018 10:35:21.270822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 10:35:21.270876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 10:35:21.270931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 10:35:21.270967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 10:35:21.270768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 10:35:21.277485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 10:35:21.277926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 10:35:21.277969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 10:35:21.278005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 10:35:21.278061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 10:35:21.278105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 10:35:21.278182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 10:35:21.278237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 10:35:21.278354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 10:35:21.278393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 10:35:21.278420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 10:35:21.278435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 10:35:21.278450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1018 10:35:22.471984       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
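
[Editor's note] The burst of "Failed to watch ... is forbidden" errors at 10:35:21 is transient: the scheduler starts before the apiserver's rbac/bootstrap-roles poststarthook has installed the system ClusterRoleBindings, and the final "Caches are synced" line shows the watches succeeding once RBAC lands. A sketch that asks the same authorization question explicitly via a SelfSubjectAccessReview; kubeconfig loading is an assumption for illustration:

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same verb/group/resource as one of the failing watches in the log.
	sar := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "list",
				Group:    "storage.k8s.io",
				Resource: "csinodes",
			},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("allowed:", resp.Status.Allowed, "reason:", resp.Status.Reason)
}
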
	
	
	==> kubelet <==
	Oct 18 10:35:29 no-preload-027087 kubelet[2025]: E1018 10:35:29.217566    2025 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-027087\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-027087' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 18 10:35:29 no-preload-027087 kubelet[2025]: E1018 10:35:29.217651    2025 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-s87k4\" is forbidden: User \"system:node:no-preload-027087\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-027087' and this object" podUID="2e127631-8e09-43da-8d5a-7238894eedac" pod="kube-system/kube-proxy-s87k4"
	Oct 18 10:35:29 no-preload-027087 kubelet[2025]: I1018 10:35:29.302695    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e127631-8e09-43da-8d5a-7238894eedac-lib-modules\") pod \"kube-proxy-s87k4\" (UID: \"2e127631-8e09-43da-8d5a-7238894eedac\") " pod="kube-system/kube-proxy-s87k4"
	Oct 18 10:35:29 no-preload-027087 kubelet[2025]: I1018 10:35:29.302951    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e127631-8e09-43da-8d5a-7238894eedac-xtables-lock\") pod \"kube-proxy-s87k4\" (UID: \"2e127631-8e09-43da-8d5a-7238894eedac\") " pod="kube-system/kube-proxy-s87k4"
	Oct 18 10:35:29 no-preload-027087 kubelet[2025]: I1018 10:35:29.302999    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4286ff28-6eca-4678-9d54-3a2dbe9bf8d1-xtables-lock\") pod \"kindnet-t9q5g\" (UID: \"4286ff28-6eca-4678-9d54-3a2dbe9bf8d1\") " pod="kube-system/kindnet-t9q5g"
	Oct 18 10:35:29 no-preload-027087 kubelet[2025]: I1018 10:35:29.303021    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4286ff28-6eca-4678-9d54-3a2dbe9bf8d1-lib-modules\") pod \"kindnet-t9q5g\" (UID: \"4286ff28-6eca-4678-9d54-3a2dbe9bf8d1\") " pod="kube-system/kindnet-t9q5g"
	Oct 18 10:35:29 no-preload-027087 kubelet[2025]: I1018 10:35:29.303041    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e127631-8e09-43da-8d5a-7238894eedac-kube-proxy\") pod \"kube-proxy-s87k4\" (UID: \"2e127631-8e09-43da-8d5a-7238894eedac\") " pod="kube-system/kube-proxy-s87k4"
	Oct 18 10:35:29 no-preload-027087 kubelet[2025]: I1018 10:35:29.303399    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfmrk\" (UniqueName: \"kubernetes.io/projected/2e127631-8e09-43da-8d5a-7238894eedac-kube-api-access-cfmrk\") pod \"kube-proxy-s87k4\" (UID: \"2e127631-8e09-43da-8d5a-7238894eedac\") " pod="kube-system/kube-proxy-s87k4"
	Oct 18 10:35:29 no-preload-027087 kubelet[2025]: I1018 10:35:29.303496    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4286ff28-6eca-4678-9d54-3a2dbe9bf8d1-cni-cfg\") pod \"kindnet-t9q5g\" (UID: \"4286ff28-6eca-4678-9d54-3a2dbe9bf8d1\") " pod="kube-system/kindnet-t9q5g"
	Oct 18 10:35:29 no-preload-027087 kubelet[2025]: I1018 10:35:29.303546    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjkps\" (UniqueName: \"kubernetes.io/projected/4286ff28-6eca-4678-9d54-3a2dbe9bf8d1-kube-api-access-bjkps\") pod \"kindnet-t9q5g\" (UID: \"4286ff28-6eca-4678-9d54-3a2dbe9bf8d1\") " pod="kube-system/kindnet-t9q5g"
	Oct 18 10:35:30 no-preload-027087 kubelet[2025]: I1018 10:35:30.522262    2025 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 10:35:30 no-preload-027087 kubelet[2025]: W1018 10:35:30.802671    2025 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/crio-ababb063da5c3ed3811b9ce885109fcacefe6b3cfdea9345d90069b402d94e43 WatchSource:0}: Error finding container ababb063da5c3ed3811b9ce885109fcacefe6b3cfdea9345d90069b402d94e43: Status 404 returned error can't find the container with id ababb063da5c3ed3811b9ce885109fcacefe6b3cfdea9345d90069b402d94e43
	Oct 18 10:35:30 no-preload-027087 kubelet[2025]: I1018 10:35:30.886408    2025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s87k4" podStartSLOduration=1.88638288 podStartE2EDuration="1.88638288s" podCreationTimestamp="2025-10-18 10:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:35:30.886295839 +0000 UTC m=+7.476647530" watchObservedRunningTime="2025-10-18 10:35:30.88638288 +0000 UTC m=+7.476734571"
	Oct 18 10:35:34 no-preload-027087 kubelet[2025]: I1018 10:35:34.918781    2025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-t9q5g" podStartSLOduration=2.236380104 podStartE2EDuration="5.918762463s" podCreationTimestamp="2025-10-18 10:35:29 +0000 UTC" firstStartedPulling="2025-10-18 10:35:30.807452374 +0000 UTC m=+7.397804073" lastFinishedPulling="2025-10-18 10:35:34.489834733 +0000 UTC m=+11.080186432" observedRunningTime="2025-10-18 10:35:34.900628441 +0000 UTC m=+11.490980140" watchObservedRunningTime="2025-10-18 10:35:34.918762463 +0000 UTC m=+11.509114154"
	Oct 18 10:35:45 no-preload-027087 kubelet[2025]: I1018 10:35:45.231780    2025 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 10:35:45 no-preload-027087 kubelet[2025]: I1018 10:35:45.463921    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff570964-d787-4c47-a498-4ac05ed09b0a-config-volume\") pod \"coredns-66bc5c9577-wt4wd\" (UID: \"ff570964-d787-4c47-a498-4ac05ed09b0a\") " pod="kube-system/coredns-66bc5c9577-wt4wd"
	Oct 18 10:35:45 no-preload-027087 kubelet[2025]: I1018 10:35:45.464001    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b6343f75-ba5e-48f6-8eec-5343cabc28a4-tmp\") pod \"storage-provisioner\" (UID: \"b6343f75-ba5e-48f6-8eec-5343cabc28a4\") " pod="kube-system/storage-provisioner"
	Oct 18 10:35:45 no-preload-027087 kubelet[2025]: I1018 10:35:45.464038    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpx4q\" (UniqueName: \"kubernetes.io/projected/b6343f75-ba5e-48f6-8eec-5343cabc28a4-kube-api-access-fpx4q\") pod \"storage-provisioner\" (UID: \"b6343f75-ba5e-48f6-8eec-5343cabc28a4\") " pod="kube-system/storage-provisioner"
	Oct 18 10:35:45 no-preload-027087 kubelet[2025]: I1018 10:35:45.464064    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bww8h\" (UniqueName: \"kubernetes.io/projected/ff570964-d787-4c47-a498-4ac05ed09b0a-kube-api-access-bww8h\") pod \"coredns-66bc5c9577-wt4wd\" (UID: \"ff570964-d787-4c47-a498-4ac05ed09b0a\") " pod="kube-system/coredns-66bc5c9577-wt4wd"
	Oct 18 10:35:45 no-preload-027087 kubelet[2025]: W1018 10:35:45.688483    2025 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/crio-7c3d13fde85ebccc075cb76fefca322d90a973283a10f0de1b49d4a2f7693253 WatchSource:0}: Error finding container 7c3d13fde85ebccc075cb76fefca322d90a973283a10f0de1b49d4a2f7693253: Status 404 returned error can't find the container with id 7c3d13fde85ebccc075cb76fefca322d90a973283a10f0de1b49d4a2f7693253
	Oct 18 10:35:45 no-preload-027087 kubelet[2025]: W1018 10:35:45.710033    2025 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/crio-4a5c6e23f476db2457febe74ef3c34be10524084b28f127c6c573620a55894cd WatchSource:0}: Error finding container 4a5c6e23f476db2457febe74ef3c34be10524084b28f127c6c573620a55894cd: Status 404 returned error can't find the container with id 4a5c6e23f476db2457febe74ef3c34be10524084b28f127c6c573620a55894cd
	Oct 18 10:35:45 no-preload-027087 kubelet[2025]: I1018 10:35:45.997821    2025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.997803398 podStartE2EDuration="15.997803398s" podCreationTimestamp="2025-10-18 10:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:35:45.949467043 +0000 UTC m=+22.539818742" watchObservedRunningTime="2025-10-18 10:35:45.997803398 +0000 UTC m=+22.588155097"
	Oct 18 10:35:48 no-preload-027087 kubelet[2025]: I1018 10:35:48.145495    2025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wt4wd" podStartSLOduration=19.145470787 podStartE2EDuration="19.145470787s" podCreationTimestamp="2025-10-18 10:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 10:35:45.999633977 +0000 UTC m=+22.589985668" watchObservedRunningTime="2025-10-18 10:35:48.145470787 +0000 UTC m=+24.735822486"
	Oct 18 10:35:48 no-preload-027087 kubelet[2025]: I1018 10:35:48.195178    2025 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dsxj\" (UniqueName: \"kubernetes.io/projected/df688ec3-f32c-4bdb-8846-fe0eeaff3436-kube-api-access-5dsxj\") pod \"busybox\" (UID: \"df688ec3-f32c-4bdb-8846-fe0eeaff3436\") " pod="default/busybox"
	Oct 18 10:35:48 no-preload-027087 kubelet[2025]: W1018 10:35:48.530813    2025 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/crio-32946fd2ba6c063f9c162531c37d82f6d7ebd9e2aa0bfd942959c75817f8cb5c WatchSource:0}: Error finding container 32946fd2ba6c063f9c162531c37d82f6d7ebd9e2aa0bfd942959c75817f8cb5c: Status 404 returned error can't find the container with id 32946fd2ba6c063f9c162531c37d82f6d7ebd9e2aa0bfd942959c75817f8cb5c
	
	
	==> storage-provisioner [38f4139c05d6c17fdb38b6c0669c661d8ade1c2b4362fc44864b0eb91d48045d] <==
	I1018 10:35:45.847221       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 10:35:45.899926       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 10:35:45.900620       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 10:35:45.905919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:35:45.926831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:35:45.927107       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 10:35:45.931170       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a2f18fe6-030e-454a-877d-bce5a2ea2a3e", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-027087_52627bab-8582-4fe1-a5b8-f0ad43679b81 became leader
	I1018 10:35:45.937829       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-027087_52627bab-8582-4fe1-a5b8-f0ad43679b81!
	W1018 10:35:45.952407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:35:45.984179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:35:46.038100       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-027087_52627bab-8582-4fe1-a5b8-f0ad43679b81!
	W1018 10:35:47.987442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:35:47.997737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:35:50.001246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:35:50.006137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:35:52.014518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:35:52.021799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:35:54.025450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:35:54.031266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:35:56.035054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:35:56.042085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:35:58.048066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:35:58.058820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-027087 -n no-preload-027087
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-027087 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.05s)
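
Note on the storage-provisioner log above: the repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings come from the provisioner acquiring and renewing its leader lease through the legacy v1 Endpoints resource lock (leaderelection.go), so a pair of warnings recurs on every renewal tick. Below is a minimal client-go sketch of the Lease-based lock that avoids those warnings; the identity string and error handling are illustrative assumptions, not the provisioner's actual code.

    // Hypothetical sketch, not the storage-provisioner's implementation:
    // leader election against a coordination.k8s.io Lease instead of the
    // deprecated v1 Endpoints lock seen in the log above.
    package main

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Lease lock in kube-system, mirroring the
        // "k8s.io-minikube-hostpath" lease name seen in the log.
        lock := &resourcelock.LeaseLock{
            LeaseMeta: metav1.ObjectMeta{
                Name:      "k8s.io-minikube-hostpath",
                Namespace: "kube-system",
            },
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: "example-id"},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    // start the provisioner controller here
                },
                OnStoppedLeading: func() {},
            },
        })
    }
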

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-027087 --alsologtostderr -v=1
E1018 10:37:28.222001  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-027087 --alsologtostderr -v=1: exit status 80 (1.931283872s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-027087 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 10:37:26.625641  503935 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:37:26.625758  503935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:37:26.625769  503935 out.go:374] Setting ErrFile to fd 2...
	I1018 10:37:26.625776  503935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:37:26.626068  503935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:37:26.626309  503935 out.go:368] Setting JSON to false
	I1018 10:37:26.626331  503935 mustload.go:65] Loading cluster: no-preload-027087
	I1018 10:37:26.626702  503935 config.go:182] Loaded profile config "no-preload-027087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:37:26.627182  503935 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:37:26.646506  503935 host.go:66] Checking if "no-preload-027087" exists ...
	I1018 10:37:26.646833  503935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:37:26.709320  503935 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 10:37:26.699288228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:37:26.710030  503935 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-027087 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 10:37:26.713416  503935 out.go:179] * Pausing node no-preload-027087 ... 
	I1018 10:37:26.716314  503935 host.go:66] Checking if "no-preload-027087" exists ...
	I1018 10:37:26.716669  503935 ssh_runner.go:195] Run: systemctl --version
	I1018 10:37:26.716719  503935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:37:26.736341  503935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:37:26.840012  503935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:37:26.852844  503935 pause.go:52] kubelet running: true
	I1018 10:37:26.852961  503935 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:37:27.088578  503935 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:37:27.088693  503935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:37:27.173633  503935 cri.go:89] found id: "2cbdb2a8528e4250452cbcfde4d0a6d774dfa919eece0abfe3baf1ff93f2c38d"
	I1018 10:37:27.173657  503935 cri.go:89] found id: "4b114ce56de2ff36fd41657a70702954670fd16b567eaf13b39d0991c0e0a02b"
	I1018 10:37:27.173663  503935 cri.go:89] found id: "0de3795567e7dc2268ccf4ed71cc0a8ca7702aa8ac6ca751af108c5769adf6aa"
	I1018 10:37:27.173678  503935 cri.go:89] found id: "6868199e0f045baf4d0c7a7f0f549c97259e341becc1e091f19130b6f1755866"
	I1018 10:37:27.173699  503935 cri.go:89] found id: "c22c014947e9e9dc024d5d72f215ef4605e6ee6ca05a8753ddd66dd51ee9561c"
	I1018 10:37:27.173708  503935 cri.go:89] found id: "d968383151da802efc708a7893731beff322978e9b5c6aca61c66a9890a4c2a7"
	I1018 10:37:27.173713  503935 cri.go:89] found id: "5238dbc53ff79046c10165b63aa29a7982380bb94f85339a7f129ae1992c4868"
	I1018 10:37:27.173721  503935 cri.go:89] found id: "e261c5b0adde6796a1e7af7d2200022c257e6f59c693f0219b6f283cde6d5b44"
	I1018 10:37:27.173725  503935 cri.go:89] found id: "7fcb9a21d1a3177d9033d4cb769bd9b7f55c25b4643124089cf2f78a928074e9"
	I1018 10:37:27.173737  503935 cri.go:89] found id: "e007cc5af2d622f31ced7fa509429c00f7b2e44a9cd37dcfbec526228eb011e7"
	I1018 10:37:27.173746  503935 cri.go:89] found id: "9919fe4eee7dc51c131498b9e1e50e76edc9753040feea7bff2ec0193354e184"
	I1018 10:37:27.173749  503935 cri.go:89] found id: ""
	I1018 10:37:27.173812  503935 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:37:27.193007  503935 retry.go:31] will retry after 279.901883ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:37:27Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:37:27.473620  503935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:37:27.487407  503935 pause.go:52] kubelet running: false
	I1018 10:37:27.487469  503935 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:37:27.662823  503935 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:37:27.662920  503935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:37:27.731138  503935 cri.go:89] found id: "2cbdb2a8528e4250452cbcfde4d0a6d774dfa919eece0abfe3baf1ff93f2c38d"
	I1018 10:37:27.731164  503935 cri.go:89] found id: "4b114ce56de2ff36fd41657a70702954670fd16b567eaf13b39d0991c0e0a02b"
	I1018 10:37:27.731170  503935 cri.go:89] found id: "0de3795567e7dc2268ccf4ed71cc0a8ca7702aa8ac6ca751af108c5769adf6aa"
	I1018 10:37:27.731174  503935 cri.go:89] found id: "6868199e0f045baf4d0c7a7f0f549c97259e341becc1e091f19130b6f1755866"
	I1018 10:37:27.731177  503935 cri.go:89] found id: "c22c014947e9e9dc024d5d72f215ef4605e6ee6ca05a8753ddd66dd51ee9561c"
	I1018 10:37:27.731181  503935 cri.go:89] found id: "d968383151da802efc708a7893731beff322978e9b5c6aca61c66a9890a4c2a7"
	I1018 10:37:27.731184  503935 cri.go:89] found id: "5238dbc53ff79046c10165b63aa29a7982380bb94f85339a7f129ae1992c4868"
	I1018 10:37:27.731187  503935 cri.go:89] found id: "e261c5b0adde6796a1e7af7d2200022c257e6f59c693f0219b6f283cde6d5b44"
	I1018 10:37:27.731193  503935 cri.go:89] found id: "7fcb9a21d1a3177d9033d4cb769bd9b7f55c25b4643124089cf2f78a928074e9"
	I1018 10:37:27.731200  503935 cri.go:89] found id: "e007cc5af2d622f31ced7fa509429c00f7b2e44a9cd37dcfbec526228eb011e7"
	I1018 10:37:27.731203  503935 cri.go:89] found id: "9919fe4eee7dc51c131498b9e1e50e76edc9753040feea7bff2ec0193354e184"
	I1018 10:37:27.731207  503935 cri.go:89] found id: ""
	I1018 10:37:27.731265  503935 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:37:27.742074  503935 retry.go:31] will retry after 475.63892ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:37:27Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:37:28.218690  503935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:37:28.232613  503935 pause.go:52] kubelet running: false
	I1018 10:37:28.232718  503935 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 10:37:28.402543  503935 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 10:37:28.402664  503935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 10:37:28.473787  503935 cri.go:89] found id: "2cbdb2a8528e4250452cbcfde4d0a6d774dfa919eece0abfe3baf1ff93f2c38d"
	I1018 10:37:28.473811  503935 cri.go:89] found id: "4b114ce56de2ff36fd41657a70702954670fd16b567eaf13b39d0991c0e0a02b"
	I1018 10:37:28.473817  503935 cri.go:89] found id: "0de3795567e7dc2268ccf4ed71cc0a8ca7702aa8ac6ca751af108c5769adf6aa"
	I1018 10:37:28.473821  503935 cri.go:89] found id: "6868199e0f045baf4d0c7a7f0f549c97259e341becc1e091f19130b6f1755866"
	I1018 10:37:28.473825  503935 cri.go:89] found id: "c22c014947e9e9dc024d5d72f215ef4605e6ee6ca05a8753ddd66dd51ee9561c"
	I1018 10:37:28.473829  503935 cri.go:89] found id: "d968383151da802efc708a7893731beff322978e9b5c6aca61c66a9890a4c2a7"
	I1018 10:37:28.473832  503935 cri.go:89] found id: "5238dbc53ff79046c10165b63aa29a7982380bb94f85339a7f129ae1992c4868"
	I1018 10:37:28.473836  503935 cri.go:89] found id: "e261c5b0adde6796a1e7af7d2200022c257e6f59c693f0219b6f283cde6d5b44"
	I1018 10:37:28.473839  503935 cri.go:89] found id: "7fcb9a21d1a3177d9033d4cb769bd9b7f55c25b4643124089cf2f78a928074e9"
	I1018 10:37:28.473855  503935 cri.go:89] found id: "e007cc5af2d622f31ced7fa509429c00f7b2e44a9cd37dcfbec526228eb011e7"
	I1018 10:37:28.473862  503935 cri.go:89] found id: "9919fe4eee7dc51c131498b9e1e50e76edc9753040feea7bff2ec0193354e184"
	I1018 10:37:28.473865  503935 cri.go:89] found id: ""
	I1018 10:37:28.473913  503935 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 10:37:28.488640  503935 out.go:203] 
	W1018 10:37:28.491435  503935 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:37:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:37:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 10:37:28.491454  503935 out.go:285] * 
	* 
	W1018 10:37:28.498510  503935 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 10:37:28.501387  503935 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-027087 --alsologtostderr -v=1 failed: exit status 80
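
The stderr trace above shows the failure shape: kubelet is stopped and crictl still lists the kube-system containers, but every "sudo runc list -f json" call errors with "open /run/runc: no such file or directory", so minikube retries with growing backoff (about 280ms, then about 476ms) and finally surfaces GUEST_PAUSE with exit status 80. A minimal Go sketch of that retry-then-fail shape follows; the helper names are assumptions and this is not minikube's implementation.

    // Illustrative retry-with-backoff loop matching the pause trace above
    // (retry.go "will retry after ..."); not minikube's actual code.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // listRunc mirrors the failing call from the log: it errors when the
    // runtime state directory /run/runc does not exist on the node.
    func listRunc() ([]byte, error) {
        return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
    }

    func main() {
        backoff := 250 * time.Millisecond
        for attempt := 1; attempt <= 3; attempt++ {
            out, err := listRunc()
            if err == nil {
                fmt.Printf("running containers: %s\n", out)
                return
            }
            if attempt == 3 {
                // After retries are exhausted, minikube reports this as
                // "Exiting due to GUEST_PAUSE" with exit status 80.
                fmt.Printf("giving up: %v\n%s\n", err, out)
                return
            }
            fmt.Printf("will retry after %s: %v\n", backoff, err)
            time.Sleep(backoff)
            backoff *= 2
        }
    }
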
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-027087
helpers_test.go:243: (dbg) docker inspect no-preload-027087:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75",
	        "Created": "2025-10-18T10:34:32.909990218Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500475,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:36:13.029786514Z",
	            "FinishedAt": "2025-10-18T10:36:11.262531109Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/hostname",
	        "HostsPath": "/var/lib/docker/containers/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/hosts",
	        "LogPath": "/var/lib/docker/containers/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75-json.log",
	        "Name": "/no-preload-027087",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-027087:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-027087",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75",
	                "LowerDir": "/var/lib/docker/overlay2/dc44eaa4dc21510f8bf74df6fee94b5b27213db1c9918e5fa7933fdeabf5674e-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dc44eaa4dc21510f8bf74df6fee94b5b27213db1c9918e5fa7933fdeabf5674e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dc44eaa4dc21510f8bf74df6fee94b5b27213db1c9918e5fa7933fdeabf5674e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dc44eaa4dc21510f8bf74df6fee94b5b27213db1c9918e5fa7933fdeabf5674e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-027087",
	                "Source": "/var/lib/docker/volumes/no-preload-027087/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-027087",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-027087",
	                "name.minikube.sigs.k8s.io": "no-preload-027087",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7348f34b1bfc071dee859e95d1c9cda99310e2d60e64e7203a706f5df8aac4b3",
	            "SandboxKey": "/var/run/docker/netns/7348f34b1bfc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-027087": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:80:31:e8:b5:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "87a54e6a9010d18200cc9cc9a9c81fbb30eaec85d99c1ec1614afefa1f14d2cb",
	                    "EndpointID": "64f7f96362fc70aa6b8f6ffbf3feed7693751d5cc1a79c52cd5ec47b23cd9478",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-027087",
	                        "f282a9c13400"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
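
The inspect dump above is the key post-mortem datum: State.Status is "running" and Paused is false, i.e. the container never reached the paused state despite the pause attempt, which is why the subsequent status check still prints "Running". A small sketch (not the harness's code) of decoding just those fields from docker inspect output:

    // Illustrative: read only State.Status and State.Paused from
    // `docker inspect`, the two fields the post-mortem above relies on.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type inspectState struct {
        State struct {
            Status string `json:"Status"`
            Paused bool   `json:"Paused"`
        } `json:"State"`
    }

    func main() {
        out, err := exec.Command("docker", "inspect", "no-preload-027087").Output()
        if err != nil {
            panic(err)
        }
        var containers []inspectState
        if err := json.Unmarshal(out, &containers); err != nil {
            panic(err)
        }
        for _, c := range containers {
            fmt.Printf("status=%s paused=%v\n", c.State.Status, c.State.Paused)
        }
    }
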
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-027087 -n no-preload-027087
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-027087 -n no-preload-027087: exit status 2 (347.974511ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-027087 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-027087 logs -n 25: (1.404963445s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p default-k8s-diff-port-715182 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-715182                                                                                                                                                                                                               │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p default-k8s-diff-port-715182                                                                                                                                                                                                               │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-922359                                                                                                                                                                                                               │ disable-driver-mounts-922359 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ start   │ -p no-preload-027087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:35 UTC │
	│ image   │ embed-certs-101897 image list --format=json                                                                                                                                                                                                   │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ pause   │ -p embed-certs-101897 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	│ delete  │ -p embed-certs-101897                                                                                                                                                                                                                         │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p embed-certs-101897                                                                                                                                                                                                                         │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ start   │ -p newest-cni-577403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:35 UTC │
	│ addons  │ enable metrics-server -p newest-cni-577403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │                     │
	│ stop    │ -p newest-cni-577403 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ addons  │ enable dashboard -p newest-cni-577403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ start   │ -p newest-cni-577403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ image   │ newest-cni-577403 image list --format=json                                                                                                                                                                                                    │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ pause   │ -p newest-cni-577403 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-027087 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │                     │
	│ stop    │ -p no-preload-027087 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:36 UTC │
	│ delete  │ -p newest-cni-577403                                                                                                                                                                                                                          │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:36 UTC │ 18 Oct 25 10:36 UTC │
	│ delete  │ -p newest-cni-577403                                                                                                                                                                                                                          │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:36 UTC │ 18 Oct 25 10:36 UTC │
	│ start   │ -p auto-881658 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-881658                  │ jenkins │ v1.37.0 │ 18 Oct 25 10:36 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-027087 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:36 UTC │ 18 Oct 25 10:36 UTC │
	│ start   │ -p no-preload-027087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:36 UTC │ 18 Oct 25 10:37 UTC │
	│ image   │ no-preload-027087 image list --format=json                                                                                                                                                                                                    │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:37 UTC │ 18 Oct 25 10:37 UTC │
	│ pause   │ -p no-preload-027087 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:37 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:36:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 10:36:12.513246  500152 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:36:12.513414  500152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:36:12.513436  500152 out.go:374] Setting ErrFile to fd 2...
	I1018 10:36:12.513456  500152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:36:12.513722  500152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:36:12.514100  500152 out.go:368] Setting JSON to false
	I1018 10:36:12.514974  500152 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8323,"bootTime":1760775450,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:36:12.516581  500152 start.go:141] virtualization:  
	I1018 10:36:12.520623  500152 out.go:179] * [no-preload-027087] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:36:12.523578  500152 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:36:12.523666  500152 notify.go:220] Checking for updates...
	I1018 10:36:12.529352  500152 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:36:12.532695  500152 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:36:12.536034  500152 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:36:12.538082  500152 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:36:12.540966  500152 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:36:12.544250  500152 config.go:182] Loaded profile config "no-preload-027087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:36:12.544795  500152 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:36:12.619250  500152 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:36:12.619382  500152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:36:12.758925  500152 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-18 10:36:12.749588162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:36:12.759064  500152 docker.go:318] overlay module found
	I1018 10:36:12.762318  500152 out.go:179] * Using the docker driver based on existing profile
	I1018 10:36:12.765208  500152 start.go:305] selected driver: docker
	I1018 10:36:12.765225  500152 start.go:925] validating driver "docker" against &{Name:no-preload-027087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-027087 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:36:12.765369  500152 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:36:12.766017  500152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:36:12.913520  500152 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-18 10:36:12.89861497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:36:12.913850  500152 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:36:12.913885  500152 cni.go:84] Creating CNI manager for ""
	I1018 10:36:12.913949  500152 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:36:12.913992  500152 start.go:349] cluster config:
	{Name:no-preload-027087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-027087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:36:12.917403  500152 out.go:179] * Starting "no-preload-027087" primary control-plane node in "no-preload-027087" cluster
	I1018 10:36:12.921285  500152 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:36:12.924392  500152 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:36:12.927216  500152 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:36:12.927375  500152 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/config.json ...
	I1018 10:36:12.927698  500152 cache.go:107] acquiring lock: {Name:mkaf3d4648d07ea61f5c43b4ac6cff6e96e07d0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.927776  500152 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 10:36:12.927788  500152 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 97.24µs
	I1018 10:36:12.927801  500152 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 10:36:12.927817  500152 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:36:12.928077  500152 cache.go:107] acquiring lock: {Name:mkce90ae98faaf046844c77feccd02a8c89b22bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.928148  500152 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 10:36:12.928157  500152 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 86.065µs
	I1018 10:36:12.928164  500152 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 10:36:12.928175  500152 cache.go:107] acquiring lock: {Name:mkaa713f6c6c749f7890994ea47ccb489ab7b76a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.928205  500152 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 10:36:12.928210  500152 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 35.881µs
	I1018 10:36:12.928216  500152 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 10:36:12.928228  500152 cache.go:107] acquiring lock: {Name:mkbf154924b5d05f1add0f80d2d8992cab46ca22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.928262  500152 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 10:36:12.928267  500152 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 43.341µs
	I1018 10:36:12.928273  500152 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 10:36:12.928288  500152 cache.go:107] acquiring lock: {Name:mk7c500c022aee187177cdcb3e6cd138895cc689 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.928316  500152 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 10:36:12.928321  500152 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 40.222µs
	I1018 10:36:12.928327  500152 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 10:36:12.928337  500152 cache.go:107] acquiring lock: {Name:mk8d87cb313c81485b1cabba19862a22e85903db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.928362  500152 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 10:36:12.928367  500152 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 31.106µs
	I1018 10:36:12.928373  500152 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 10:36:12.928381  500152 cache.go:107] acquiring lock: {Name:mkf60d23fd6f24668b2e7aa1b277366e0a8c4f15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.928406  500152 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1018 10:36:12.928429  500152 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 48.624µs
	I1018 10:36:12.928436  500152 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 10:36:12.928446  500152 cache.go:107] acquiring lock: {Name:mk79330e484fcb6a5af61229914c16bea91c5633 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.928473  500152 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 10:36:12.928478  500152 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 33.322µs
	I1018 10:36:12.928483  500152 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 10:36:12.928490  500152 cache.go:87] Successfully saved all images to host disk.
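Each cache hit above costs only tens of microseconds because the check is a stat of the cached tarball taken under a per-image lock; only on a miss does minikube pay for saving the image to a tar file. A minimal sketch of that exists-check pattern, with an illustrative cache root and a single shared mutex standing in for minikube's named locks:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
		"sync"
	)

	// One shared mutex stands in for minikube's per-image named locks.
	var cacheMu sync.Mutex

	// cachePath maps "registry.k8s.io/pause:3.10.1" to
	// <root>/registry.k8s.io/pause_3.10.1, matching the paths in the log.
	func cachePath(root, image string) string {
		return filepath.Join(root, strings.ReplaceAll(image, ":", "_"))
	}

	// alreadyCached reports whether the tarball for image exists, so the
	// expensive save-to-tar step can be skipped on a hit.
	func alreadyCached(root, image string) (bool, error) {
		cacheMu.Lock()
		defer cacheMu.Unlock()
		_, err := os.Stat(cachePath(root, image))
		if err == nil {
			return true, nil
		}
		if os.IsNotExist(err) {
			return false, nil
		}
		return false, err
	}

	func main() {
		hit, err := alreadyCached("/tmp/images/arm64", "registry.k8s.io/pause:3.10.1")
		fmt.Println(hit, err)
	}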
	I1018 10:36:12.957402  500152 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:36:12.957421  500152 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 10:36:12.957439  500152 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:36:12.957461  500152 start.go:360] acquireMachinesLock for no-preload-027087: {Name:mk3407a2c92d7e64b372433da7fc52893eca365e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.957512  500152 start.go:364] duration metric: took 36.448µs to acquireMachinesLock for "no-preload-027087"
	I1018 10:36:12.957539  500152 start.go:96] Skipping create...Using existing machine configuration
	I1018 10:36:12.957545  500152 fix.go:54] fixHost starting: 
	I1018 10:36:12.957805  500152 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:36:12.987068  500152 fix.go:112] recreateIfNeeded on no-preload-027087: state=Stopped err=<nil>
	W1018 10:36:12.987112  500152 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 10:36:10.998643  499205 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-881658:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.417064984s)
	I1018 10:36:10.998676  499205 kic.go:203] duration metric: took 4.417219176s to extract preloaded images to volume ...
	W1018 10:36:10.998812  499205 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 10:36:10.998916  499205 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 10:36:11.054675  499205 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-881658 --name auto-881658 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-881658 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-881658 --network auto-881658 --ip 192.168.85.2 --volume auto-881658:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 10:36:11.430992  499205 cli_runner.go:164] Run: docker container inspect auto-881658 --format={{.State.Running}}
	I1018 10:36:11.458648  499205 cli_runner.go:164] Run: docker container inspect auto-881658 --format={{.State.Status}}
	I1018 10:36:11.491991  499205 cli_runner.go:164] Run: docker exec auto-881658 stat /var/lib/dpkg/alternatives/iptables
	I1018 10:36:11.563398  499205 oci.go:144] the created container "auto-881658" has a running status.
	I1018 10:36:11.563437  499205 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa...
	I1018 10:36:13.621436  499205 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 10:36:13.654848  499205 cli_runner.go:164] Run: docker container inspect auto-881658 --format={{.State.Status}}
	I1018 10:36:13.680443  499205 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 10:36:13.680464  499205 kic_runner.go:114] Args: [docker exec --privileged auto-881658 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 10:36:13.725903  499205 cli_runner.go:164] Run: docker container inspect auto-881658 --format={{.State.Status}}
	I1018 10:36:13.748663  499205 machine.go:93] provisionDockerMachine start ...
	I1018 10:36:13.748766  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:13.781757  499205 main.go:141] libmachine: Using SSH client type: native
	I1018 10:36:13.782084  499205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I1018 10:36:13.782093  499205 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:36:13.949147  499205 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-881658
	
	I1018 10:36:13.949237  499205 ubuntu.go:182] provisioning hostname "auto-881658"
	I1018 10:36:13.949354  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:13.967284  499205 main.go:141] libmachine: Using SSH client type: native
	I1018 10:36:13.967594  499205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I1018 10:36:13.967605  499205 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-881658 && echo "auto-881658" | sudo tee /etc/hostname
	I1018 10:36:14.127078  499205 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-881658
	
	I1018 10:36:14.127175  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:14.144386  499205 main.go:141] libmachine: Using SSH client type: native
	I1018 10:36:14.144699  499205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I1018 10:36:14.144716  499205 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-881658' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-881658/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-881658' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:36:14.289382  499205 main.go:141] libmachine: SSH cmd err, output: <nil>: 
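provisionDockerMachine drives each of these steps over SSH to the forwarded port (33464 here) using the generated id_rsa key. A rough equivalent using the plain golang.org/x/crypto/ssh package — not minikube's libmachine client — with the host-key check deliberately skipped since the target is a local container:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and port taken from the log above; everything else is a
		// plain x/crypto/ssh client, not minikube's libmachine wrapper.
		key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/auto-881658/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local container
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33464", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.Output("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}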
	I1018 10:36:14.289412  499205 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:36:14.289436  499205 ubuntu.go:190] setting up certificates
	I1018 10:36:14.289447  499205 provision.go:84] configureAuth start
	I1018 10:36:14.289511  499205 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-881658
	I1018 10:36:14.306221  499205 provision.go:143] copyHostCerts
	I1018 10:36:14.306286  499205 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:36:14.306299  499205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:36:14.306380  499205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:36:14.306484  499205 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:36:14.306495  499205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:36:14.306522  499205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:36:14.306757  499205 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:36:14.306770  499205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:36:14.306801  499205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:36:14.306867  499205 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.auto-881658 san=[127.0.0.1 192.168.85.2 auto-881658 localhost minikube]
	I1018 10:36:14.368446  499205 provision.go:177] copyRemoteCerts
	I1018 10:36:14.368521  499205 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:36:14.368563  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:14.392355  499205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa Username:docker}
	I1018 10:36:14.497028  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1018 10:36:14.514630  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:36:14.533328  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:36:14.554203  499205 provision.go:87] duration metric: took 264.739955ms to configureAuth
	I1018 10:36:14.554232  499205 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:36:14.554421  499205 config.go:182] Loaded profile config "auto-881658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:36:14.554531  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:14.571498  499205 main.go:141] libmachine: Using SSH client type: native
	I1018 10:36:14.571817  499205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I1018 10:36:14.571838  499205 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:36:14.854453  499205 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:36:14.854474  499205 machine.go:96] duration metric: took 1.105791654s to provisionDockerMachine
	I1018 10:36:14.854484  499205 client.go:171] duration metric: took 8.955276861s to LocalClient.Create
	I1018 10:36:14.854497  499205 start.go:167] duration metric: took 8.955355836s to libmachine.API.Create "auto-881658"
	I1018 10:36:14.854503  499205 start.go:293] postStartSetup for "auto-881658" (driver="docker")
	I1018 10:36:14.854517  499205 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:36:14.854615  499205 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:36:14.854655  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:14.882879  499205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa Username:docker}
	I1018 10:36:14.998269  499205 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:36:15.001855  499205 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:36:15.001885  499205 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:36:15.001896  499205 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:36:15.001954  499205 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:36:15.002042  499205 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:36:15.002143  499205 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:36:15.012845  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:36:15.040344  499205 start.go:296] duration metric: took 185.825601ms for postStartSetup
	I1018 10:36:15.040794  499205 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-881658
	I1018 10:36:15.066475  499205 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/config.json ...
	I1018 10:36:15.066788  499205 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:36:15.066848  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:15.085616  499205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa Username:docker}
	I1018 10:36:15.190999  499205 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:36:15.196349  499205 start.go:128] duration metric: took 9.300879285s to createHost
	I1018 10:36:15.196415  499205 start.go:83] releasing machines lock for "auto-881658", held for 9.301053736s
	I1018 10:36:15.196492  499205 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-881658
	I1018 10:36:15.214254  499205 ssh_runner.go:195] Run: cat /version.json
	I1018 10:36:15.214307  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:15.214378  499205 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:36:15.214451  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:15.240906  499205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa Username:docker}
	I1018 10:36:15.242768  499205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa Username:docker}
	I1018 10:36:15.441280  499205 ssh_runner.go:195] Run: systemctl --version
	I1018 10:36:15.448051  499205 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:36:15.486311  499205 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:36:15.490630  499205 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:36:15.490709  499205 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:36:15.520479  499205 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 10:36:15.520501  499205 start.go:495] detecting cgroup driver to use...
	I1018 10:36:15.520535  499205 detect.go:187] detected "cgroupfs" cgroup driver on host os
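The "cgroupfs" result feeds the cri-o configuration below. minikube's detect.go applies its own heuristics to choose between the "cgroupfs" and "systemd" drivers; as a rough stand-in, the cgroup v2 unified hierarchy can be recognized by the presence of /sys/fs/cgroup/cgroup.controllers:

	package main

	import (
		"fmt"
		"os"
	)

	// cgroupVersion reports 2 when the host mounts the cgroup v2 unified
	// hierarchy: /sys/fs/cgroup/cgroup.controllers only exists on v2.
	// Illustrative only; minikube's detect.go makes the actual driver choice.
	func cgroupVersion() int {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			return 2
		}
		return 1
	}

	func main() {
		fmt.Printf("cgroup v%d\n", cgroupVersion())
	}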
	I1018 10:36:15.520587  499205 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:36:15.539233  499205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:36:15.552493  499205 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:36:15.552560  499205 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:36:15.570392  499205 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:36:15.589687  499205 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:36:15.710179  499205 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:36:15.838031  499205 docker.go:234] disabling docker service ...
	I1018 10:36:15.838120  499205 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:36:15.860504  499205 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:36:15.874170  499205 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:36:15.993544  499205 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:36:16.115533  499205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:36:16.129630  499205 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:36:16.144953  499205 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:36:16.145056  499205 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:16.154614  499205 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:36:16.154732  499205 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:16.163985  499205 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:16.173047  499205 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:16.181560  499205 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:36:16.189988  499205 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:16.198593  499205 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:16.212255  499205 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:16.221515  499205 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:36:16.230495  499205 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:36:16.238001  499205 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:36:16.354408  499205 ssh_runner.go:195] Run: sudo systemctl restart crio
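Pieced together from the sed edits above, the relevant settings in /etc/crio/crio.conf.d/02-crio.conf end up as follows before crio is restarted (other settings, and the exact ordering within the file, are omitted):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]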
	I1018 10:36:16.471884  499205 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:36:16.471951  499205 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:36:16.475904  499205 start.go:563] Will wait 60s for crictl version
	I1018 10:36:16.475967  499205 ssh_runner.go:195] Run: which crictl
	I1018 10:36:16.480088  499205 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:36:16.514558  499205 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:36:16.514648  499205 ssh_runner.go:195] Run: crio --version
	I1018 10:36:16.556698  499205 ssh_runner.go:195] Run: crio --version
	I1018 10:36:16.595952  499205 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:36:12.990546  500152 out.go:252] * Restarting existing docker container for "no-preload-027087" ...
	I1018 10:36:12.990652  500152 cli_runner.go:164] Run: docker start no-preload-027087
	I1018 10:36:13.323768  500152 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:36:13.380633  500152 kic.go:430] container "no-preload-027087" state is running.
	I1018 10:36:13.381045  500152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-027087
	I1018 10:36:13.431920  500152 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/config.json ...
	I1018 10:36:13.432153  500152 machine.go:93] provisionDockerMachine start ...
	I1018 10:36:13.432217  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:13.487902  500152 main.go:141] libmachine: Using SSH client type: native
	I1018 10:36:13.488489  500152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1018 10:36:13.488507  500152 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:36:13.489237  500152 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 10:36:16.640901  500152 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-027087
	
	I1018 10:36:16.640931  500152 ubuntu.go:182] provisioning hostname "no-preload-027087"
	I1018 10:36:16.640999  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:16.662683  500152 main.go:141] libmachine: Using SSH client type: native
	I1018 10:36:16.662985  500152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1018 10:36:16.662997  500152 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-027087 && echo "no-preload-027087" | sudo tee /etc/hostname
	I1018 10:36:16.835669  500152 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-027087
	
	I1018 10:36:16.835744  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:16.862526  500152 main.go:141] libmachine: Using SSH client type: native
	I1018 10:36:16.862839  500152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1018 10:36:16.862861  500152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-027087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-027087/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-027087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:36:17.037390  500152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 10:36:17.037476  500152 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:36:17.037539  500152 ubuntu.go:190] setting up certificates
	I1018 10:36:17.037569  500152 provision.go:84] configureAuth start
	I1018 10:36:17.037657  500152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-027087
	I1018 10:36:17.081444  500152 provision.go:143] copyHostCerts
	I1018 10:36:17.081516  500152 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:36:17.081533  500152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:36:17.081619  500152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:36:17.081718  500152 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:36:17.081724  500152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:36:17.081749  500152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:36:17.081807  500152 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:36:17.081812  500152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:36:17.081834  500152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:36:17.081887  500152 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.no-preload-027087 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-027087]
	I1018 10:36:17.222922  500152 provision.go:177] copyRemoteCerts
	I1018 10:36:17.222994  500152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:36:17.223041  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:17.250752  500152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:36:17.357810  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:36:17.378671  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 10:36:17.399534  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 10:36:17.420028  500152 provision.go:87] duration metric: took 382.408017ms to configureAuth
	I1018 10:36:17.420055  500152 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:36:17.420240  500152 config.go:182] Loaded profile config "no-preload-027087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:36:17.420345  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:17.445017  500152 main.go:141] libmachine: Using SSH client type: native
	I1018 10:36:17.445385  500152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1018 10:36:17.445401  500152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:36:17.813337  500152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:36:17.813360  500152 machine.go:96] duration metric: took 4.381198152s to provisionDockerMachine
	I1018 10:36:17.813370  500152 start.go:293] postStartSetup for "no-preload-027087" (driver="docker")
	I1018 10:36:17.813388  500152 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:36:17.813450  500152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:36:17.813501  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:17.839821  500152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:36:17.951800  500152 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:36:17.966626  500152 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:36:17.966669  500152 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:36:17.966682  500152 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:36:17.966737  500152 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:36:17.966823  500152 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:36:17.966924  500152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:36:17.983427  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:36:18.003814  500152 start.go:296] duration metric: took 190.428486ms for postStartSetup
	I1018 10:36:18.003907  500152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:36:18.003951  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:18.024321  500152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:36:18.131375  500152 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:36:18.137524  500152 fix.go:56] duration metric: took 5.179967219s for fixHost
	I1018 10:36:18.137554  500152 start.go:83] releasing machines lock for "no-preload-027087", held for 5.180033575s
	I1018 10:36:18.137634  500152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-027087
	I1018 10:36:18.161547  500152 ssh_runner.go:195] Run: cat /version.json
	I1018 10:36:18.161596  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:18.161904  500152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:36:18.161957  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:18.189114  500152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:36:18.213696  500152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:36:18.298125  500152 ssh_runner.go:195] Run: systemctl --version
	I1018 10:36:18.398935  500152 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:36:18.447456  500152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:36:18.452376  500152 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:36:18.452444  500152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:36:18.461247  500152 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 10:36:18.461266  500152 start.go:495] detecting cgroup driver to use...
	I1018 10:36:18.461298  500152 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:36:18.461343  500152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:36:18.477966  500152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:36:18.492821  500152 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:36:18.492887  500152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:36:18.509786  500152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:36:18.524941  500152 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:36:18.693699  500152 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:36:18.871643  500152 docker.go:234] disabling docker service ...
	I1018 10:36:18.871719  500152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:36:18.889151  500152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:36:18.903683  500152 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:36:19.056017  500152 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:36:19.215315  500152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:36:19.233408  500152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:36:19.250114  500152 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:36:19.250176  500152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:19.259922  500152 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:36:19.259996  500152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:19.271936  500152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:19.287905  500152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:19.299843  500152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:36:19.308849  500152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:19.319593  500152 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:19.328579  500152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:19.337966  500152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:36:19.346409  500152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:36:19.354624  500152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:36:19.535684  500152 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 10:36:19.692699  500152 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:36:19.692763  500152 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:36:19.697513  500152 start.go:563] Will wait 60s for crictl version
	I1018 10:36:19.697633  500152 ssh_runner.go:195] Run: which crictl
	I1018 10:36:19.701351  500152 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:36:19.736875  500152 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:36:19.737014  500152 ssh_runner.go:195] Run: crio --version
	I1018 10:36:19.783523  500152 ssh_runner.go:195] Run: crio --version
	I1018 10:36:19.840854  500152 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:36:16.598601  499205 cli_runner.go:164] Run: docker network inspect auto-881658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:36:16.614915  499205 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 10:36:16.618866  499205 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:36:16.629524  499205 kubeadm.go:883] updating cluster {Name:auto-881658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-881658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:36:16.629649  499205 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:36:16.629715  499205 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:36:16.680595  499205 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:36:16.680616  499205 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:36:16.680672  499205 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:36:16.708696  499205 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:36:16.708715  499205 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:36:16.708722  499205 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 10:36:16.708812  499205 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-881658 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-881658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 10:36:16.708894  499205 ssh_runner.go:195] Run: crio config
	I1018 10:36:16.804428  499205 cni.go:84] Creating CNI manager for ""
	I1018 10:36:16.804493  499205 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:36:16.804525  499205 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:36:16.804580  499205 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-881658 NodeName:auto-881658 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:36:16.804737  499205 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-881658"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
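That generated config is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. minikube renders it from a Go text/template filled with the kubeadm options logged above; a toy version of that rendering step, with a drastically trimmed template and made-up field names (the real template and option set differ):

	package main

	import (
		"os"
		"text/template"
	)

	// A toy stand-in for the template rendering minikube performs in its
	// kubeadm package; the real template and field names differ.
	var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.Port}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	`))

	func main() {
		params := struct {
			NodeIP, CRISocket, NodeName string
			Port                        int
		}{
			NodeIP:    "192.168.85.2", // values from the kubeadm options above
			CRISocket: "/var/run/crio/crio.sock",
			NodeName:  "auto-881658",
			Port:      8443,
		}
		if err := kubeadmTmpl.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}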
	
	I1018 10:36:16.804823  499205 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:36:16.813149  499205 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:36:16.813276  499205 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:36:16.821873  499205 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1018 10:36:16.842468  499205 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:36:16.857664  499205 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1018 10:36:16.878543  499205 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:36:16.883000  499205 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:36:16.894012  499205 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:36:17.031944  499205 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:36:17.054932  499205 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658 for IP: 192.168.85.2
	I1018 10:36:17.054950  499205 certs.go:195] generating shared ca certs ...
	I1018 10:36:17.054967  499205 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:17.055107  499205 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:36:17.055147  499205 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:36:17.055154  499205 certs.go:257] generating profile certs ...
	I1018 10:36:17.055207  499205 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.key
	I1018 10:36:17.055218  499205 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt with IP's: []
	I1018 10:36:17.316717  499205 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt ...
	I1018 10:36:17.316801  499205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt: {Name:mk2a0e2efcca901b177388430f644cc2a3c5a78e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:17.317100  499205 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.key ...
	I1018 10:36:17.317149  499205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.key: {Name:mk4edfdbb2cf2fe230e74bbb751a4acd670dd51c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:17.317336  499205 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.key.65aa3f66
	I1018 10:36:17.317388  499205 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.crt.65aa3f66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 10:36:18.801774  499205 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.crt.65aa3f66 ...
	I1018 10:36:18.801808  499205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.crt.65aa3f66: {Name:mkd38a34570225be4b64c5f2be447acccfbd44e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:18.801994  499205 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.key.65aa3f66 ...
	I1018 10:36:18.802011  499205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.key.65aa3f66: {Name:mkc28f42b1bf8858fb9964bdbb8e42e82affed28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:18.802094  499205 certs.go:382] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.crt.65aa3f66 -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.crt
	I1018 10:36:18.802195  499205 certs.go:386] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.key.65aa3f66 -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.key
	I1018 10:36:18.802258  499205 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/proxy-client.key
	I1018 10:36:18.802277  499205 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/proxy-client.crt with IP's: []
	I1018 10:36:18.926807  499205 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/proxy-client.crt ...
	I1018 10:36:18.926838  499205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/proxy-client.crt: {Name:mk3c01d16ed59ea21230b79d5cc98161fde9be21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:18.927064  499205 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/proxy-client.key ...
	I1018 10:36:18.927083  499205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/proxy-client.key: {Name:mkf68824996920ea57e33eb17f89bcff14154bb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:18.927289  499205 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:36:18.927332  499205 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:36:18.927348  499205 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:36:18.927375  499205 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:36:18.927407  499205 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:36:18.927464  499205 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:36:18.927516  499205 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:36:18.928170  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:36:18.956571  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:36:18.984576  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:36:19.007061  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:36:19.033887  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1018 10:36:19.061049  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 10:36:19.081608  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:36:19.103275  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 10:36:19.136688  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:36:19.159511  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:36:19.179437  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:36:19.204935  499205 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
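After the transfers above, the apiserver cert on the node should carry exactly the SANs requested earlier (10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2). A manual spot-check, outside the harness:
	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'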
	I1018 10:36:19.222914  499205 ssh_runner.go:195] Run: openssl version
	I1018 10:36:19.231477  499205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:36:19.241016  499205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:36:19.245457  499205 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:36:19.245521  499205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:36:19.288652  499205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:36:19.297705  499205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:36:19.306907  499205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:36:19.311875  499205 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:36:19.311942  499205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:36:19.355475  499205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:36:19.364864  499205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:36:19.374268  499205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:36:19.377945  499205 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:36:19.378053  499205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:36:19.438423  499205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
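The hash-named symlinks created above (51391683.0, 3ec20f2e.0, b5213941.0) follow the OpenSSL c_rehash convention: the link name is the cert's subject hash plus a .0 suffix, which is how openssl locates CAs under /etc/ssl/certs. Reproducing one link by hand, with the same commands the log runs spelled out:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"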
	I1018 10:36:19.457981  499205 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:36:19.462655  499205 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 10:36:19.462752  499205 kubeadm.go:400] StartCluster: {Name:auto-881658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-881658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:36:19.462884  499205 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:36:19.462978  499205 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:36:19.533512  499205 cri.go:89] found id: ""
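An empty id list here just means no kube-system containers exist yet on the fresh node. For reference, the same query in a more readable form (assumes crictl is already pointed at the CRI-O socket, as it is inside the minikube node):
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system           # full listing
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # ids only, as used above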
	I1018 10:36:19.533631  499205 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:36:19.545135  499205 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 10:36:19.557621  499205 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 10:36:19.557743  499205 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 10:36:19.567322  499205 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 10:36:19.567393  499205 kubeadm.go:157] found existing configuration files:
	
	I1018 10:36:19.567474  499205 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 10:36:19.576098  499205 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 10:36:19.576238  499205 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 10:36:19.584212  499205 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 10:36:19.596572  499205 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 10:36:19.596687  499205 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 10:36:19.604614  499205 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 10:36:19.613423  499205 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 10:36:19.613548  499205 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 10:36:19.621438  499205 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 10:36:19.631884  499205 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 10:36:19.632005  499205 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
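The four grep-then-rm exchanges above are one loop in spirit: any kubeconfig under /etc/kubernetes that does not point at the expected control-plane endpoint is treated as stale and removed before kubeadm init runs. Condensed into a sketch (endpoint copied from the log):
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done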
	I1018 10:36:19.639802  499205 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 10:36:19.691444  499205 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 10:36:19.694923  499205 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 10:36:19.744457  499205 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 10:36:19.744531  499205 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 10:36:19.744568  499205 kubeadm.go:318] OS: Linux
	I1018 10:36:19.744615  499205 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 10:36:19.744666  499205 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 10:36:19.744715  499205 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 10:36:19.744766  499205 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 10:36:19.744818  499205 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 10:36:19.744868  499205 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 10:36:19.744915  499205 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 10:36:19.744966  499205 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 10:36:19.745014  499205 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 10:36:19.843030  499205 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 10:36:19.843145  499205 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 10:36:19.843240  499205 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 10:36:19.856397  499205 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 10:36:19.861512  499205 out.go:252]   - Generating certificates and keys ...
	I1018 10:36:19.861612  499205 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 10:36:19.861688  499205 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 10:36:20.221962  499205 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 10:36:20.581559  499205 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 10:36:19.843744  500152 cli_runner.go:164] Run: docker network inspect no-preload-027087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:36:19.881244  500152 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 10:36:19.885898  500152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:36:19.899341  500152 kubeadm.go:883] updating cluster {Name:no-preload-027087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-027087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:36:19.899462  500152 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:36:19.899507  500152 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:36:19.933061  500152 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:36:19.933081  500152 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:36:19.933088  500152 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 10:36:19.933197  500152 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-027087 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-027087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
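The drop-in above relies on a systemd rule worth calling out: the bare ExecStart= line first clears the ExecStart list inherited from the base kubelet.service, so the following ExecStart= fully replaces the command rather than appending a second one. To inspect the merged result on the node (standard systemd tooling, not part of the test):
	systemctl cat kubelet          # base unit plus the 10-kubeadm.conf drop-in, merged
	systemd-delta --type=extended  # list every drop-in override on the system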
	I1018 10:36:19.933271  500152 ssh_runner.go:195] Run: crio config
	I1018 10:36:20.001503  500152 cni.go:84] Creating CNI manager for ""
	I1018 10:36:20.001578  500152 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:36:20.001613  500152 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:36:20.001662  500152 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-027087 NodeName:no-preload-027087 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:36:20.001861  500152 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-027087"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
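As with the auto-881658 run earlier, the rendered kubeadm.yaml is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), which kubeadm consumes in one pass. A hedged way to exercise such a config by hand without touching the node's state:
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run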
	I1018 10:36:20.001977  500152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:36:20.011252  500152 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:36:20.011451  500152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:36:20.022705  500152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 10:36:20.042187  500152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:36:20.058850  500152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 10:36:20.072736  500152 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:36:20.077102  500152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:36:20.087936  500152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:36:20.246619  500152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:36:20.264026  500152 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087 for IP: 192.168.76.2
	I1018 10:36:20.264045  500152 certs.go:195] generating shared ca certs ...
	I1018 10:36:20.264060  500152 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:20.264200  500152 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:36:20.264238  500152 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:36:20.264245  500152 certs.go:257] generating profile certs ...
	I1018 10:36:20.264330  500152 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.key
	I1018 10:36:20.264409  500152 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.key.1343fb15
	I1018 10:36:20.264447  500152 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/proxy-client.key
	I1018 10:36:20.264568  500152 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:36:20.264596  500152 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:36:20.264604  500152 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:36:20.264626  500152 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:36:20.264646  500152 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:36:20.264674  500152 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:36:20.264719  500152 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:36:20.265413  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:36:20.315597  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:36:20.338998  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:36:20.369643  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:36:20.422480  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 10:36:20.486631  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:36:20.549203  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:36:20.571908  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 10:36:20.602909  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:36:20.628638  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:36:20.652428  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:36:20.671976  500152 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:36:20.693166  500152 ssh_runner.go:195] Run: openssl version
	I1018 10:36:20.699853  500152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:36:20.709034  500152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:36:20.713138  500152 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:36:20.713213  500152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:36:20.756816  500152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:36:20.765681  500152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:36:20.774695  500152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:36:20.779079  500152 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:36:20.779142  500152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:36:20.820608  500152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:36:20.829099  500152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:36:20.839416  500152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:36:20.845421  500152 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:36:20.845482  500152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:36:20.902444  500152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 10:36:20.912799  500152 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:36:20.917453  500152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 10:36:20.960454  500152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 10:36:21.042936  500152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 10:36:21.145480  500152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 10:36:21.292567  500152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 10:36:21.371121  500152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
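The -checkend 86400 calls above are the restart path's expiry probe: openssl exits 0 if the cert is still valid 86400 seconds (24h) from now and non-zero otherwise, so a clean sweep means no cert needs regenerating. Spelled out for one cert:
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h; would be regenerated"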
	I1018 10:36:21.463336  500152 kubeadm.go:400] StartCluster: {Name:no-preload-027087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-027087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:36:21.463426  500152 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:36:21.463499  500152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:36:21.516278  500152 cri.go:89] found id: "d968383151da802efc708a7893731beff322978e9b5c6aca61c66a9890a4c2a7"
	I1018 10:36:21.516303  500152 cri.go:89] found id: "5238dbc53ff79046c10165b63aa29a7982380bb94f85339a7f129ae1992c4868"
	I1018 10:36:21.516318  500152 cri.go:89] found id: "e261c5b0adde6796a1e7af7d2200022c257e6f59c693f0219b6f283cde6d5b44"
	I1018 10:36:21.516322  500152 cri.go:89] found id: "7fcb9a21d1a3177d9033d4cb769bd9b7f55c25b4643124089cf2f78a928074e9"
	I1018 10:36:21.516325  500152 cri.go:89] found id: ""
	I1018 10:36:21.516376  500152 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 10:36:21.545505  500152 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:36:21Z" level=error msg="open /run/runc: no such file or directory"
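The runc failure here is benign: runc keeps its state under /run/runc by default, and that directory simply does not exist when nothing has been paused, so minikube logs the unpause attempt as failed and moves on. Checking the same thing directly (default state dir assumed):
	sudo ls /run/runc 2>/dev/null || echo "no runc state dir: nothing to unpause"
	sudo runc list -f json         # the exact probe the log ran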
	I1018 10:36:21.545591  500152 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:36:21.560421  500152 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 10:36:21.560441  500152 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 10:36:21.560494  500152 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 10:36:21.572706  500152 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 10:36:21.573154  500152 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-027087" does not appear in /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:36:21.573291  500152 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-293333/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-027087" cluster setting kubeconfig missing "no-preload-027087" context setting]
	I1018 10:36:21.573610  500152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
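After the repair above, the jenkins kubeconfig should again carry a cluster and a context named no-preload-027087. A quick manual verification with plain kubectl:
	kubectl --kubeconfig /home/jenkins/minikube-integration/21764-293333/kubeconfig \
	  config get-contexts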
	I1018 10:36:21.574883  500152 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 10:36:21.600607  500152 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 10:36:21.600642  500152 kubeadm.go:601] duration metric: took 40.194519ms to restartPrimaryControlPlane
	I1018 10:36:21.600651  500152 kubeadm.go:402] duration metric: took 137.325537ms to StartCluster
	I1018 10:36:21.600666  500152 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:21.600730  500152 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:36:21.601418  500152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:21.601642  500152 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:36:21.602029  500152 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:36:21.602111  500152 addons.go:69] Setting storage-provisioner=true in profile "no-preload-027087"
	I1018 10:36:21.602125  500152 addons.go:238] Setting addon storage-provisioner=true in "no-preload-027087"
	W1018 10:36:21.602130  500152 addons.go:247] addon storage-provisioner should already be in state true
	I1018 10:36:21.602156  500152 host.go:66] Checking if "no-preload-027087" exists ...
	I1018 10:36:21.602581  500152 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:36:21.602924  500152 config.go:182] Loaded profile config "no-preload-027087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:36:21.602988  500152 addons.go:69] Setting dashboard=true in profile "no-preload-027087"
	I1018 10:36:21.602997  500152 addons.go:238] Setting addon dashboard=true in "no-preload-027087"
	W1018 10:36:21.603004  500152 addons.go:247] addon dashboard should already be in state true
	I1018 10:36:21.603026  500152 host.go:66] Checking if "no-preload-027087" exists ...
	I1018 10:36:21.603420  500152 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:36:21.607257  500152 addons.go:69] Setting default-storageclass=true in profile "no-preload-027087"
	I1018 10:36:21.607462  500152 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-027087"
	I1018 10:36:21.607815  500152 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:36:21.608025  500152 out.go:179] * Verifying Kubernetes components...
	I1018 10:36:21.611900  500152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:36:21.652909  500152 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:36:21.656144  500152 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:36:21.656164  500152 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:36:21.656224  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:21.663592  500152 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 10:36:21.669077  500152 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 10:36:21.671482  500152 addons.go:238] Setting addon default-storageclass=true in "no-preload-027087"
	W1018 10:36:21.671503  500152 addons.go:247] addon default-storageclass should already be in state true
	I1018 10:36:21.671527  500152 host.go:66] Checking if "no-preload-027087" exists ...
	I1018 10:36:21.671957  500152 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:36:21.674152  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 10:36:21.674187  500152 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 10:36:21.674258  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:21.714602  500152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:36:21.721372  500152 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:36:21.721396  500152 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:36:21.721458  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:21.723156  500152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:36:21.756835  500152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:36:22.089029  500152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:36:22.102299  500152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:36:22.230507  500152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:36:22.250831  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 10:36:22.250905  500152 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 10:36:22.332071  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 10:36:22.332146  500152 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 10:36:22.463788  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 10:36:22.463876  500152 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 10:36:22.489086  499205 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 10:36:22.872596  499205 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 10:36:23.605550  499205 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 10:36:23.605687  499205 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-881658 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 10:36:24.300304  499205 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 10:36:24.302462  499205 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-881658 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 10:36:24.739941  499205 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 10:36:24.912011  499205 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 10:36:25.561698  499205 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 10:36:25.562044  499205 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 10:36:22.566649  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 10:36:22.566719  500152 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 10:36:22.622447  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 10:36:22.622524  500152 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 10:36:22.667085  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 10:36:22.667162  500152 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 10:36:22.693280  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 10:36:22.693353  500152 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 10:36:22.731688  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 10:36:22.731760  500152 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 10:36:22.783712  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 10:36:22.783785  500152 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 10:36:22.822191  500152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 10:36:25.933651  499205 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 10:36:27.206792  499205 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 10:36:27.613568  499205 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 10:36:28.517656  499205 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 10:36:29.816956  499205 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 10:36:29.817823  499205 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 10:36:29.820721  499205 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 10:36:29.824447  499205 out.go:252]   - Booting up control plane ...
	I1018 10:36:29.824554  499205 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 10:36:29.824636  499205 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 10:36:29.825649  499205 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 10:36:29.848733  499205 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 10:36:29.848847  499205 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 10:36:29.862958  499205 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 10:36:29.863064  499205 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 10:36:29.863106  499205 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 10:36:30.077052  499205 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 10:36:30.077201  499205 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 10:36:32.663979  500152 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.574865424s)
	I1018 10:36:32.664037  500152 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.561667021s)
	I1018 10:36:32.664068  500152 node_ready.go:35] waiting up to 6m0s for node "no-preload-027087" to be "Ready" ...
	I1018 10:36:32.664388  500152 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.433807112s)
	I1018 10:36:32.712056  500152 node_ready.go:49] node "no-preload-027087" is "Ready"
	I1018 10:36:32.712083  500152 node_ready.go:38] duration metric: took 47.994237ms for node "no-preload-027087" to be "Ready" ...
	I1018 10:36:32.712096  500152 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:36:32.712158  500152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:36:33.209656  500152 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.387374417s)
	I1018 10:36:33.209921  500152 api_server.go:72] duration metric: took 11.608245627s to wait for apiserver process to appear ...
	I1018 10:36:33.209972  500152 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:36:33.210027  500152 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 10:36:33.213029  500152 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-027087 addons enable metrics-server
	
	I1018 10:36:33.215887  500152 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
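With the three addons reported enabled, their state can be confirmed from the same binary the test drives (a manual follow-up, not part of the run):
	out/minikube-linux-arm64 -p no-preload-027087 addons list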
	I1018 10:36:32.078666  499205 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001750584s
	I1018 10:36:32.095169  499205 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 10:36:32.095279  499205 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 10:36:32.095378  499205 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 10:36:32.095465  499205 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 10:36:33.218709  500152 addons.go:514] duration metric: took 11.616667913s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1018 10:36:33.239721  500152 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
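The healthz probe above can be reproduced against the apiserver directly; -k (or the cluster CA) is needed since the endpoint serves the minikube-issued cert. /healthz is deprecated upstream in favor of /livez and /readyz, which this apiserver also serves (the kubectl context name below is assumed to match the profile):
	curl -k https://192.168.76.2:8443/healthz
	kubectl --context no-preload-027087 get --raw /livez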
	I1018 10:36:33.240735  500152 api_server.go:141] control plane version: v1.34.1
	I1018 10:36:33.240757  500152 api_server.go:131] duration metric: took 30.756643ms to wait for apiserver health ...
	I1018 10:36:33.240766  500152 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:36:33.245607  500152 system_pods.go:59] 8 kube-system pods found
	I1018 10:36:33.245704  500152 system_pods.go:61] "coredns-66bc5c9577-wt4wd" [ff570964-d787-4c47-a498-4ac05ed09b0a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:36:33.245731  500152 system_pods.go:61] "etcd-no-preload-027087" [df0b81be-5ccd-481d-88e8-0a351635eab5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:36:33.245771  500152 system_pods.go:61] "kindnet-t9q5g" [4286ff28-6eca-4678-9d54-3a2dbe9bf8d1] Running
	I1018 10:36:33.245797  500152 system_pods.go:61] "kube-apiserver-no-preload-027087" [949b1bb0-6625-40d4-b2a4-75e49fd87133] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:36:33.245821  500152 system_pods.go:61] "kube-controller-manager-no-preload-027087" [1395022f-1ef0-43f8-b175-f5c5fdfdb777] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:36:33.245858  500152 system_pods.go:61] "kube-proxy-s87k4" [2e127631-8e09-43da-8d5a-7238894eedac] Running
	I1018 10:36:33.245884  500152 system_pods.go:61] "kube-scheduler-no-preload-027087" [dd112b07-cc98-4f21-8211-3ac896ec0be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:36:33.245904  500152 system_pods.go:61] "storage-provisioner" [b6343f75-ba5e-48f6-8eec-5343cabc28a4] Running
	I1018 10:36:33.245939  500152 system_pods.go:74] duration metric: took 5.167032ms to wait for pod list to return data ...
	I1018 10:36:33.245963  500152 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:36:33.259167  500152 default_sa.go:45] found service account: "default"
	I1018 10:36:33.259190  500152 default_sa.go:55] duration metric: took 13.210425ms for default service account to be created ...
	I1018 10:36:33.259199  500152 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 10:36:33.263107  500152 system_pods.go:86] 8 kube-system pods found
	I1018 10:36:33.263188  500152 system_pods.go:89] "coredns-66bc5c9577-wt4wd" [ff570964-d787-4c47-a498-4ac05ed09b0a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:36:33.263213  500152 system_pods.go:89] "etcd-no-preload-027087" [df0b81be-5ccd-481d-88e8-0a351635eab5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:36:33.263233  500152 system_pods.go:89] "kindnet-t9q5g" [4286ff28-6eca-4678-9d54-3a2dbe9bf8d1] Running
	I1018 10:36:33.263282  500152 system_pods.go:89] "kube-apiserver-no-preload-027087" [949b1bb0-6625-40d4-b2a4-75e49fd87133] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:36:33.263306  500152 system_pods.go:89] "kube-controller-manager-no-preload-027087" [1395022f-1ef0-43f8-b175-f5c5fdfdb777] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:36:33.263343  500152 system_pods.go:89] "kube-proxy-s87k4" [2e127631-8e09-43da-8d5a-7238894eedac] Running
	I1018 10:36:33.263367  500152 system_pods.go:89] "kube-scheduler-no-preload-027087" [dd112b07-cc98-4f21-8211-3ac896ec0be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:36:33.263386  500152 system_pods.go:89] "storage-provisioner" [b6343f75-ba5e-48f6-8eec-5343cabc28a4] Running
	I1018 10:36:33.263423  500152 system_pods.go:126] duration metric: took 4.203574ms to wait for k8s-apps to be running ...
	I1018 10:36:33.263448  500152 system_svc.go:44] waiting for kubelet service to be running ...
	I1018 10:36:33.263535  500152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:36:33.299892  500152 system_svc.go:56] duration metric: took 36.436052ms WaitForService to wait for kubelet
	I1018 10:36:33.299970  500152 kubeadm.go:586] duration metric: took 11.698293925s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:36:33.300002  500152 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:36:33.310360  500152 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:36:33.310439  500152 node_conditions.go:123] node cpu capacity is 2
	I1018 10:36:33.310467  500152 node_conditions.go:105] duration metric: took 10.443678ms to run NodePressure ...
	I1018 10:36:33.310491  500152 start.go:241] waiting for startup goroutines ...
	I1018 10:36:33.310524  500152 start.go:246] waiting for cluster config update ...
	I1018 10:36:33.310553  500152 start.go:255] writing updated cluster config ...
	I1018 10:36:33.310887  500152 ssh_runner.go:195] Run: rm -f paused
	I1018 10:36:33.321602  500152 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:36:33.326077  500152 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wt4wd" in "kube-system" namespace to be "Ready" or be gone ...
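	The pod_ready polling that follows is minikube-internal, but an equivalent wait can be expressed with plain kubectl; a sketch using the kube-dns label from the extra-waiting list above:
	  # Block until the kube-dns pods report Ready (same 4m budget as the log)
	  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m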
	W1018 10:36:35.333108  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	I1018 10:36:36.244160  499205 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.149079229s
	I1018 10:36:40.593427  499205 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.497898011s
	I1018 10:36:42.096870  499205 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.001959644s
	I1018 10:36:42.136510  499205 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 10:36:42.167918  499205 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 10:36:42.194207  499205 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 10:36:42.194431  499205 kubeadm.go:318] [mark-control-plane] Marking the node auto-881658 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 10:36:42.214798  499205 kubeadm.go:318] [bootstrap-token] Using token: crgxz9.45dtxljsereikmmm
	W1018 10:36:37.835460  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:36:40.333748  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	I1018 10:36:42.218059  499205 out.go:252]   - Configuring RBAC rules ...
	I1018 10:36:42.218197  499205 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 10:36:42.231650  499205 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 10:36:42.251200  499205 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 10:36:42.261375  499205 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 10:36:42.267581  499205 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 10:36:42.273829  499205 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 10:36:42.505679  499205 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 10:36:43.055114  499205 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 10:36:43.507581  499205 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 10:36:43.509429  499205 kubeadm.go:318] 
	I1018 10:36:43.509520  499205 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 10:36:43.509530  499205 kubeadm.go:318] 
	I1018 10:36:43.509613  499205 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 10:36:43.509622  499205 kubeadm.go:318] 
	I1018 10:36:43.509649  499205 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 10:36:43.509715  499205 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 10:36:43.509772  499205 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 10:36:43.509780  499205 kubeadm.go:318] 
	I1018 10:36:43.509837  499205 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 10:36:43.509845  499205 kubeadm.go:318] 
	I1018 10:36:43.509895  499205 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 10:36:43.509903  499205 kubeadm.go:318] 
	I1018 10:36:43.509963  499205 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 10:36:43.510048  499205 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 10:36:43.510123  499205 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 10:36:43.510132  499205 kubeadm.go:318] 
	I1018 10:36:43.510226  499205 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 10:36:43.510311  499205 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 10:36:43.510320  499205 kubeadm.go:318] 
	I1018 10:36:43.510409  499205 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token crgxz9.45dtxljsereikmmm \
	I1018 10:36:43.510520  499205 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 \
	I1018 10:36:43.510545  499205 kubeadm.go:318] 	--control-plane 
	I1018 10:36:43.510553  499205 kubeadm.go:318] 
	I1018 10:36:43.510642  499205 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 10:36:43.510650  499205 kubeadm.go:318] 
	I1018 10:36:43.510735  499205 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token crgxz9.45dtxljsereikmmm \
	I1018 10:36:43.510842  499205 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 
	I1018 10:36:43.517542  499205 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 10:36:43.517797  499205 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 10:36:43.517914  499205 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
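	The --discovery-token-ca-cert-hash printed by kubeadm above can be recomputed from the cluster CA when validating a join command later; a sketch, assuming minikube's certificate location of /var/lib/minikube/certs/ca.crt on the node (kubeadm's stock path is /etc/kubernetes/pki/ca.crt):
	  # Recompute the sha256 discovery hash from the CA public key
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'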
	I1018 10:36:43.517999  499205 cni.go:84] Creating CNI manager for ""
	I1018 10:36:43.518029  499205 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:36:43.524710  499205 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 10:36:43.527727  499205 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 10:36:43.541278  499205 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 10:36:43.541298  499205 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 10:36:43.580786  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 10:36:44.698201  499205 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.117383107s)
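	The manifest applied above deploys kindnet; the CNI config it eventually writes can be inspected on the node once the daemon is running (the CRI-O section further down shows the same file landing on the no-preload node as 10-kindnet.conflist):
	  # Inspect the generated CNI configuration inside the node
	  minikube -p auto-881658 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist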
	I1018 10:36:44.698287  499205 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 10:36:44.698453  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:36:44.698571  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-881658 minikube.k8s.io/updated_at=2025_10_18T10_36_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=auto-881658 minikube.k8s.io/primary=true
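	The label command above stamps the node with minikube metadata; the result is visible with:
	  # Show all labels applied to the new control-plane node
	  kubectl get node auto-881658 --show-labels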
	I1018 10:36:45.069801  499205 ops.go:34] apiserver oom_adj: -16
	I1018 10:36:45.069959  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:36:45.570786  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1018 10:36:42.834507  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:36:44.835883  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:36:46.840519  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	I1018 10:36:46.070728  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:36:46.570417  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:36:47.070896  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:36:47.570347  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:36:48.070033  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:36:48.273864  499205 kubeadm.go:1113] duration metric: took 3.575459639s to wait for elevateKubeSystemPrivileges
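	The retry loop above polls for the default service account so that the minikube-rbac cluster-admin binding (created at 10:36:44.698453) can take effect; once it settles, the binding can be confirmed with:
	  # Verify the cluster-admin binding minikube created for kube-system:default
	  kubectl get clusterrolebinding minikube-rbac -o wide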
	I1018 10:36:48.273889  499205 kubeadm.go:402] duration metric: took 28.811141561s to StartCluster
	I1018 10:36:48.273906  499205 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:48.273971  499205 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:36:48.274948  499205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:48.275167  499205 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:36:48.275297  499205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 10:36:48.275550  499205 config.go:182] Loaded profile config "auto-881658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:36:48.275581  499205 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:36:48.275640  499205 addons.go:69] Setting storage-provisioner=true in profile "auto-881658"
	I1018 10:36:48.275654  499205 addons.go:238] Setting addon storage-provisioner=true in "auto-881658"
	I1018 10:36:48.275675  499205 host.go:66] Checking if "auto-881658" exists ...
	I1018 10:36:48.276198  499205 cli_runner.go:164] Run: docker container inspect auto-881658 --format={{.State.Status}}
	I1018 10:36:48.276728  499205 addons.go:69] Setting default-storageclass=true in profile "auto-881658"
	I1018 10:36:48.276749  499205 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-881658"
	I1018 10:36:48.277021  499205 cli_runner.go:164] Run: docker container inspect auto-881658 --format={{.State.Status}}
	I1018 10:36:48.284253  499205 out.go:179] * Verifying Kubernetes components...
	I1018 10:36:48.289825  499205 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:36:48.321296  499205 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:36:48.322819  499205 addons.go:238] Setting addon default-storageclass=true in "auto-881658"
	I1018 10:36:48.322855  499205 host.go:66] Checking if "auto-881658" exists ...
	I1018 10:36:48.323262  499205 cli_runner.go:164] Run: docker container inspect auto-881658 --format={{.State.Status}}
	I1018 10:36:48.327234  499205 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:36:48.327255  499205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:36:48.327324  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:48.373244  499205 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:36:48.373266  499205 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:36:48.373331  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:48.380137  499205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa Username:docker}
	I1018 10:36:48.408055  499205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa Username:docker}
	I1018 10:36:48.867256  499205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:36:48.899392  499205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:36:49.102593  499205 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:36:49.102705  499205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 10:36:50.316487  499205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.449147954s)
	I1018 10:36:50.316590  499205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.417123425s)
	I1018 10:36:50.316614  499205 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.213882346s)
	I1018 10:36:50.316649  499205 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.214034135s)
	I1018 10:36:50.317732  499205 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
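	The sed pipeline completed above injects a hosts block mapping host.minikube.internal to 192.168.85.1 into the CoreDNS Corefile; the patched config can be read back with:
	  # Print the live Corefile, including the injected hosts block
	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'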
	I1018 10:36:50.319169  499205 node_ready.go:35] waiting up to 15m0s for node "auto-881658" to be "Ready" ...
	I1018 10:36:50.368896  499205 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 10:36:50.371700  499205 addons.go:514] duration metric: took 2.096097811s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1018 10:36:49.338083  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:36:51.832877  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	I1018 10:36:50.821589  499205 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-881658" context rescaled to 1 replicas
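	The rescale noted above corresponds to a plain deployment scale; an equivalent manual command would be:
	  # Pin coredns to a single replica, as minikube does for one-node clusters
	  kubectl -n kube-system scale deployment coredns --replicas=1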
	W1018 10:36:52.321968  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:36:54.822742  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:36:54.332057  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:36:56.832819  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:36:57.322488  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:36:59.822256  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:36:59.332007  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:37:01.833085  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:37:01.824782  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:04.322785  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:04.331680  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:37:06.832643  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:37:06.823069  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:09.322019  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:09.331177  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:37:11.333175  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	I1018 10:37:13.332180  500152 pod_ready.go:94] pod "coredns-66bc5c9577-wt4wd" is "Ready"
	I1018 10:37:13.332213  500152 pod_ready.go:86] duration metric: took 40.0061067s for pod "coredns-66bc5c9577-wt4wd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:13.335472  500152 pod_ready.go:83] waiting for pod "etcd-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:13.340149  500152 pod_ready.go:94] pod "etcd-no-preload-027087" is "Ready"
	I1018 10:37:13.340177  500152 pod_ready.go:86] duration metric: took 4.675377ms for pod "etcd-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:13.342433  500152 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:13.347302  500152 pod_ready.go:94] pod "kube-apiserver-no-preload-027087" is "Ready"
	I1018 10:37:13.347331  500152 pod_ready.go:86] duration metric: took 4.869488ms for pod "kube-apiserver-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:13.349923  500152 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:13.530669  500152 pod_ready.go:94] pod "kube-controller-manager-no-preload-027087" is "Ready"
	I1018 10:37:13.530700  500152 pod_ready.go:86] duration metric: took 180.750984ms for pod "kube-controller-manager-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:13.730767  500152 pod_ready.go:83] waiting for pod "kube-proxy-s87k4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:14.130717  500152 pod_ready.go:94] pod "kube-proxy-s87k4" is "Ready"
	I1018 10:37:14.130751  500152 pod_ready.go:86] duration metric: took 399.906388ms for pod "kube-proxy-s87k4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:14.330101  500152 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:14.730141  500152 pod_ready.go:94] pod "kube-scheduler-no-preload-027087" is "Ready"
	I1018 10:37:14.730166  500152 pod_ready.go:86] duration metric: took 400.040489ms for pod "kube-scheduler-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:14.730179  500152 pod_ready.go:40] duration metric: took 41.408544188s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:37:14.784915  500152 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:37:14.787968  500152 out.go:179] * Done! kubectl is now configured to use "no-preload-027087" cluster and "default" namespace by default
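	After the "Done!" line, kubectl talks to the new cluster by default; a quick sanity check from the host:
	  # Confirm the active context and that the node answers
	  kubectl config current-context
	  kubectl get nodes -o wide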
	W1018 10:37:11.323097  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:13.822000  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:15.822375  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:18.322607  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:20.822930  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:23.322012  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.57465359Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9a279d2a-73e8-4a02-8ba0-1e3c3bebeba4 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.575988205Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=60c2199b-ac06-4d78-ae93-89501f90d60d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.576960451Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2/dashboard-metrics-scraper" id=d7f77df7-5988-46dd-97ca-b7e6dac77387 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.577267483Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.584972355Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.586421933Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.603773464Z" level=info msg="Created container e007cc5af2d622f31ced7fa509429c00f7b2e44a9cd37dcfbec526228eb011e7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2/dashboard-metrics-scraper" id=d7f77df7-5988-46dd-97ca-b7e6dac77387 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.604489559Z" level=info msg="Starting container: e007cc5af2d622f31ced7fa509429c00f7b2e44a9cd37dcfbec526228eb011e7" id=f0e528e5-9411-4e55-8f3b-16fba2552037 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.606293956Z" level=info msg="Started container" PID=1634 containerID=e007cc5af2d622f31ced7fa509429c00f7b2e44a9cd37dcfbec526228eb011e7 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2/dashboard-metrics-scraper id=f0e528e5-9411-4e55-8f3b-16fba2552037 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f95c9b99798ada6c0d78df5de706f078e3cb01fa10d1142a4b3d1642eb78d602
	Oct 18 10:37:07 no-preload-027087 conmon[1632]: conmon e007cc5af2d622f31ced <ninfo>: container 1634 exited with status 1
	Oct 18 10:37:08 no-preload-027087 crio[651]: time="2025-10-18T10:37:08.145095157Z" level=info msg="Removing container: 4ff408408dfa168e0c6a4f48161b231dfb33d73bd601499c276a66e4f3b1a742" id=dbf6634f-ad67-4c57-a62b-84ea67e0c507 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:37:08 no-preload-027087 crio[651]: time="2025-10-18T10:37:08.167973398Z" level=info msg="Error loading conmon cgroup of container 4ff408408dfa168e0c6a4f48161b231dfb33d73bd601499c276a66e4f3b1a742: cgroup deleted" id=dbf6634f-ad67-4c57-a62b-84ea67e0c507 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:37:08 no-preload-027087 crio[651]: time="2025-10-18T10:37:08.175788139Z" level=info msg="Removed container 4ff408408dfa168e0c6a4f48161b231dfb33d73bd601499c276a66e4f3b1a742: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2/dashboard-metrics-scraper" id=dbf6634f-ad67-4c57-a62b-84ea67e0c507 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.670975146Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.678468947Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.678505699Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.67852942Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.682644961Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.682685241Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.682711457Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.685933645Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.685968386Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.685992394Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.689435582Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.68947006Z" level=info msg="Updated default CNI network name to kindnet"
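	The CRI-O log above can be correlated with container state through crictl on the node; a sketch (the <container-id> placeholder comes from the status table below):
	  # List all containers known to CRI-O, including exited ones
	  minikube -p no-preload-027087 ssh -- sudo crictl ps -a
	  # Fetch the logs of a specific container by ID (placeholder)
	  minikube -p no-preload-027087 ssh -- sudo crictl logs <container-id>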
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e007cc5af2d62       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   f95c9b99798ad       dashboard-metrics-scraper-6ffb444bf9-vvmt2   kubernetes-dashboard
	2cbdb2a8528e4       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           27 seconds ago       Running             storage-provisioner         2                   385f1b53b202e       storage-provisioner                          kube-system
	9919fe4eee7dc       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago       Running             kubernetes-dashboard        0                   4643811be6b5e       kubernetes-dashboard-855c9754f9-trfvl        kubernetes-dashboard
	dc488ebaa6807       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   a5cbbf8b12daa       busybox                                      default
	4b114ce56de2f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   2f616200b15c3       coredns-66bc5c9577-wt4wd                     kube-system
	0de3795567e7d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   40fa6120b28bd       kube-proxy-s87k4                             kube-system
	6868199e0f045       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   ae050e65d4683       kindnet-t9q5g                                kube-system
	c22c014947e9e       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           58 seconds ago       Exited              storage-provisioner         1                   385f1b53b202e       storage-provisioner                          kube-system
	d968383151da8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   6816bd5ea7d9f       kube-apiserver-no-preload-027087             kube-system
	5238dbc53ff79       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   531f96e4818b3       kube-controller-manager-no-preload-027087    kube-system
	e261c5b0adde6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   7994e71e175a6       etcd-no-preload-027087                       kube-system
	7fcb9a21d1a31       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   fb697fb3a0920       kube-scheduler-no-preload-027087             kube-system
	
	
	==> coredns [4b114ce56de2ff36fd41657a70702954670fd16b567eaf13b39d0991c0e0a02b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39925 - 10923 "HINFO IN 7437050735640074136.3276881364983829134. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024649489s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
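	The i/o timeouts above match the window before kindnet programmed the pod network; once traffic flows, in-cluster DNS can be spot-checked with a throwaway pod (busybox:1.36 assumed pullable):
	  # One-shot DNS lookup through the cluster resolver
	  kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup kubernetes.default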
	
	
	==> describe nodes <==
	Name:               no-preload-027087
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-027087
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=no-preload-027087
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_35_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:35:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-027087
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:37:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:37:01 +0000   Sat, 18 Oct 2025 10:35:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:37:01 +0000   Sat, 18 Oct 2025 10:35:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:37:01 +0000   Sat, 18 Oct 2025 10:35:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 10:37:01 +0000   Sat, 18 Oct 2025 10:35:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-027087
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                bcb80226-a3a4-43ba-81ed-aa5457f89057
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 coredns-66bc5c9577-wt4wd                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m
	  kube-system                 etcd-no-preload-027087                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m5s
	  kube-system                 kindnet-t9q5g                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m
	  kube-system                 kube-apiserver-no-preload-027087              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-no-preload-027087     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-s87k4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-scheduler-no-preload-027087              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vvmt2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-trfvl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 118s                   kube-proxy       
	  Normal   Starting                 56s                    kube-proxy       
	  Warning  CgroupV1                 2m17s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m16s (x8 over 2m16s)  kubelet          Node no-preload-027087 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m16s (x8 over 2m16s)  kubelet          Node no-preload-027087 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m16s (x8 over 2m16s)  kubelet          Node no-preload-027087 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m6s                   kubelet          Node no-preload-027087 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m6s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m6s                   kubelet          Node no-preload-027087 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m6s                   kubelet          Node no-preload-027087 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m2s                   node-controller  Node no-preload-027087 event: Registered Node no-preload-027087 in Controller
	  Normal   NodeReady                104s                   kubelet          Node no-preload-027087 status is now: NodeReady
	  Normal   Starting                 69s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 69s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 69s)      kubelet          Node no-preload-027087 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)      kubelet          Node no-preload-027087 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 69s)      kubelet          Node no-preload-027087 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                    node-controller  Node no-preload-027087 event: Registered Node no-preload-027087 in Controller
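	The node dump above is the output of a standard describe call and can be regenerated at any time:
	  # Reproduce the node description, including conditions and events
	  kubectl describe node no-preload-027087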
	
	
	==> dmesg <==
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	[ +55.677340] overlayfs: idmapped layers are currently not supported
	[  +3.870584] overlayfs: idmapped layers are currently not supported
	[Oct18 10:24] overlayfs: idmapped layers are currently not supported
	[ +31.226998] overlayfs: idmapped layers are currently not supported
	[Oct18 10:27] overlayfs: idmapped layers are currently not supported
	[ +41.576921] overlayfs: idmapped layers are currently not supported
	[  +5.117406] overlayfs: idmapped layers are currently not supported
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	[Oct18 10:31] overlayfs: idmapped layers are currently not supported
	[  +3.453230] overlayfs: idmapped layers are currently not supported
	[Oct18 10:33] overlayfs: idmapped layers are currently not supported
	[  +6.524055] overlayfs: idmapped layers are currently not supported
	[Oct18 10:34] overlayfs: idmapped layers are currently not supported
	[Oct18 10:35] overlayfs: idmapped layers are currently not supported
	[ +27.675349] overlayfs: idmapped layers are currently not supported
	[Oct18 10:36] overlayfs: idmapped layers are currently not supported
	[ +11.230155] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e261c5b0adde6796a1e7af7d2200022c257e6f59c693f0219b6f283cde6d5b44] <==
	{"level":"warn","ts":"2025-10-18T10:36:27.565752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.614243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.661971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.737459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.765564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.814776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.826175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.852014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.868352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.885753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.909068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.931285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.965623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.999622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.020171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.049494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.071180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.097863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.132798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.162206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.197179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.231530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.253705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.282025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.413027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40798","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:37:29 up  2:19,  0 user,  load average: 3.83, 4.43, 3.54
	Linux no-preload-027087 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6868199e0f045baf4d0c7a7f0f549c97259e341becc1e091f19130b6f1755866] <==
	I1018 10:36:31.304048       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:36:31.309326       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 10:36:31.309482       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:36:31.309496       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:36:31.309506       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:36:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:36:31.669618       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:36:31.669637       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:36:31.669647       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:36:31.669919       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 10:37:01.668579       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 10:37:01.670533       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 10:37:01.670533       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 10:37:01.670637       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1018 10:37:02.870139       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 10:37:02.870171       1 metrics.go:72] Registering metrics
	I1018 10:37:02.870546       1 controller.go:711] "Syncing nftables rules"
	I1018 10:37:11.670671       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:37:11.670723       1 main.go:301] handling current node
	I1018 10:37:21.672501       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:37:21.672539       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d968383151da802efc708a7893731beff322978e9b5c6aca61c66a9890a4c2a7] <==
	I1018 10:36:30.269661       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 10:36:30.269707       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 10:36:30.292905       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 10:36:30.293287       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 10:36:30.332634       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 10:36:30.339357       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 10:36:30.339390       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 10:36:30.339499       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 10:36:30.345406       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:36:30.357267       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 10:36:30.357299       1 policy_source.go:240] refreshing policies
	I1018 10:36:30.366450       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 10:36:30.430810       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1018 10:36:30.454320       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 10:36:30.504815       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 10:36:30.630921       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 10:36:32.393911       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 10:36:32.630352       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 10:36:32.818905       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 10:36:32.874656       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 10:36:33.130271       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.153.86"}
	I1018 10:36:33.195252       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.225.52"}
	I1018 10:36:34.281090       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 10:36:34.374605       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 10:36:34.629789       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [5238dbc53ff79046c10165b63aa29a7982380bb94f85339a7f129ae1992c4868] <==
	I1018 10:36:34.249799       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 10:36:34.249966       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 10:36:34.250044       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 10:36:34.250072       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 10:36:34.257446       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 10:36:34.257448       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 10:36:34.257606       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 10:36:34.261335       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:36:34.273255       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 10:36:34.273316       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 10:36:34.273351       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 10:36:34.273379       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 10:36:34.273391       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 10:36:34.273397       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 10:36:34.273457       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 10:36:34.273489       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 10:36:34.273477       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 10:36:34.282218       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 10:36:34.282218       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 10:36:34.287570       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:36:34.287834       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 10:36:34.288752       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 10:36:34.303376       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:36:34.303472       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 10:36:34.304649       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	
	
	==> kube-proxy [0de3795567e7dc2268ccf4ed71cc0a8ca7702aa8ac6ca751af108c5769adf6aa] <==
	I1018 10:36:32.550975       1 server_linux.go:53] "Using iptables proxy"
	I1018 10:36:32.902951       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 10:36:33.003369       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 10:36:33.003421       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 10:36:33.003518       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 10:36:33.448333       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:36:33.448391       1 server_linux.go:132] "Using iptables Proxier"
	I1018 10:36:33.486751       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 10:36:33.487079       1 server.go:527] "Version info" version="v1.34.1"
	I1018 10:36:33.487103       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:36:33.488313       1 config.go:200] "Starting service config controller"
	I1018 10:36:33.488396       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 10:36:33.500522       1 config.go:106] "Starting endpoint slice config controller"
	I1018 10:36:33.500597       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 10:36:33.500647       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 10:36:33.500674       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 10:36:33.525926       1 config.go:309] "Starting node config controller"
	I1018 10:36:33.577364       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 10:36:33.585219       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 10:36:33.590496       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 10:36:33.601310       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 10:36:33.601355       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7fcb9a21d1a3177d9033d4cb769bd9b7f55c25b4643124089cf2f78a928074e9] <==
	I1018 10:36:30.241090       1 serving.go:386] Generated self-signed cert in-memory
	I1018 10:36:35.525799       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 10:36:35.525839       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:36:35.530973       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 10:36:35.531327       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 10:36:35.531349       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 10:36:35.531373       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 10:36:35.532262       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:36:35.532287       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:36:35.532307       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 10:36:35.532329       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 10:36:35.632769       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 10:36:35.632826       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:36:35.637390       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 18 10:36:31 no-preload-027087 kubelet[769]: W1018 10:36:31.480326     769 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/crio-a5cbbf8b12daa87898402fa639805ad2dc0a438a3fa39295961f78b22623c2f6 WatchSource:0}: Error finding container a5cbbf8b12daa87898402fa639805ad2dc0a438a3fa39295961f78b22623c2f6: Status 404 returned error can't find the container with id a5cbbf8b12daa87898402fa639805ad2dc0a438a3fa39295961f78b22623c2f6
	Oct 18 10:36:35 no-preload-027087 kubelet[769]: I1018 10:36:35.756167     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvc9q\" (UniqueName: \"kubernetes.io/projected/4735cd3f-7f8f-4c4f-b3db-8a6544223c4e-kube-api-access-jvc9q\") pod \"kubernetes-dashboard-855c9754f9-trfvl\" (UID: \"4735cd3f-7f8f-4c4f-b3db-8a6544223c4e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-trfvl"
	Oct 18 10:36:35 no-preload-027087 kubelet[769]: I1018 10:36:35.756230     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/885d1c16-9a7e-4c1c-bfff-6ed345623dc1-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vvmt2\" (UID: \"885d1c16-9a7e-4c1c-bfff-6ed345623dc1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2"
	Oct 18 10:36:35 no-preload-027087 kubelet[769]: I1018 10:36:35.756257     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4735cd3f-7f8f-4c4f-b3db-8a6544223c4e-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-trfvl\" (UID: \"4735cd3f-7f8f-4c4f-b3db-8a6544223c4e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-trfvl"
	Oct 18 10:36:35 no-preload-027087 kubelet[769]: I1018 10:36:35.756283     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdh2g\" (UniqueName: \"kubernetes.io/projected/885d1c16-9a7e-4c1c-bfff-6ed345623dc1-kube-api-access-hdh2g\") pod \"dashboard-metrics-scraper-6ffb444bf9-vvmt2\" (UID: \"885d1c16-9a7e-4c1c-bfff-6ed345623dc1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2"
	Oct 18 10:36:36 no-preload-027087 kubelet[769]: W1018 10:36:36.037771     769 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/crio-f95c9b99798ada6c0d78df5de706f078e3cb01fa10d1142a4b3d1642eb78d602 WatchSource:0}: Error finding container f95c9b99798ada6c0d78df5de706f078e3cb01fa10d1142a4b3d1642eb78d602: Status 404 returned error can't find the container with id f95c9b99798ada6c0d78df5de706f078e3cb01fa10d1142a4b3d1642eb78d602
	Oct 18 10:36:44 no-preload-027087 kubelet[769]: I1018 10:36:44.078191     769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-trfvl" podStartSLOduration=2.361248552 podStartE2EDuration="10.078173287s" podCreationTimestamp="2025-10-18 10:36:34 +0000 UTC" firstStartedPulling="2025-10-18 10:36:36.013098064 +0000 UTC m=+15.742529832" lastFinishedPulling="2025-10-18 10:36:43.730022717 +0000 UTC m=+23.459454567" observedRunningTime="2025-10-18 10:36:44.077754722 +0000 UTC m=+23.807186498" watchObservedRunningTime="2025-10-18 10:36:44.078173287 +0000 UTC m=+23.807605063"
	Oct 18 10:36:50 no-preload-027087 kubelet[769]: I1018 10:36:50.073967     769 scope.go:117] "RemoveContainer" containerID="ef30c02983eb28b7a364891a2e2ec0e59647874986e96b65631a811dd21cdfc3"
	Oct 18 10:36:51 no-preload-027087 kubelet[769]: I1018 10:36:51.079012     769 scope.go:117] "RemoveContainer" containerID="ef30c02983eb28b7a364891a2e2ec0e59647874986e96b65631a811dd21cdfc3"
	Oct 18 10:36:51 no-preload-027087 kubelet[769]: I1018 10:36:51.079631     769 scope.go:117] "RemoveContainer" containerID="4ff408408dfa168e0c6a4f48161b231dfb33d73bd601499c276a66e4f3b1a742"
	Oct 18 10:36:51 no-preload-027087 kubelet[769]: E1018 10:36:51.079966     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vvmt2_kubernetes-dashboard(885d1c16-9a7e-4c1c-bfff-6ed345623dc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2" podUID="885d1c16-9a7e-4c1c-bfff-6ed345623dc1"
	Oct 18 10:36:52 no-preload-027087 kubelet[769]: I1018 10:36:52.083492     769 scope.go:117] "RemoveContainer" containerID="4ff408408dfa168e0c6a4f48161b231dfb33d73bd601499c276a66e4f3b1a742"
	Oct 18 10:36:52 no-preload-027087 kubelet[769]: E1018 10:36:52.083689     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vvmt2_kubernetes-dashboard(885d1c16-9a7e-4c1c-bfff-6ed345623dc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2" podUID="885d1c16-9a7e-4c1c-bfff-6ed345623dc1"
	Oct 18 10:36:55 no-preload-027087 kubelet[769]: I1018 10:36:55.976150     769 scope.go:117] "RemoveContainer" containerID="4ff408408dfa168e0c6a4f48161b231dfb33d73bd601499c276a66e4f3b1a742"
	Oct 18 10:36:55 no-preload-027087 kubelet[769]: E1018 10:36:55.976385     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vvmt2_kubernetes-dashboard(885d1c16-9a7e-4c1c-bfff-6ed345623dc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2" podUID="885d1c16-9a7e-4c1c-bfff-6ed345623dc1"
	Oct 18 10:37:02 no-preload-027087 kubelet[769]: I1018 10:37:02.112333     769 scope.go:117] "RemoveContainer" containerID="c22c014947e9e9dc024d5d72f215ef4605e6ee6ca05a8753ddd66dd51ee9561c"
	Oct 18 10:37:07 no-preload-027087 kubelet[769]: I1018 10:37:07.573661     769 scope.go:117] "RemoveContainer" containerID="4ff408408dfa168e0c6a4f48161b231dfb33d73bd601499c276a66e4f3b1a742"
	Oct 18 10:37:08 no-preload-027087 kubelet[769]: I1018 10:37:08.138988     769 scope.go:117] "RemoveContainer" containerID="4ff408408dfa168e0c6a4f48161b231dfb33d73bd601499c276a66e4f3b1a742"
	Oct 18 10:37:08 no-preload-027087 kubelet[769]: I1018 10:37:08.141515     769 scope.go:117] "RemoveContainer" containerID="e007cc5af2d622f31ced7fa509429c00f7b2e44a9cd37dcfbec526228eb011e7"
	Oct 18 10:37:08 no-preload-027087 kubelet[769]: E1018 10:37:08.143016     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vvmt2_kubernetes-dashboard(885d1c16-9a7e-4c1c-bfff-6ed345623dc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2" podUID="885d1c16-9a7e-4c1c-bfff-6ed345623dc1"
	Oct 18 10:37:15 no-preload-027087 kubelet[769]: I1018 10:37:15.975898     769 scope.go:117] "RemoveContainer" containerID="e007cc5af2d622f31ced7fa509429c00f7b2e44a9cd37dcfbec526228eb011e7"
	Oct 18 10:37:15 no-preload-027087 kubelet[769]: E1018 10:37:15.976068     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vvmt2_kubernetes-dashboard(885d1c16-9a7e-4c1c-bfff-6ed345623dc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2" podUID="885d1c16-9a7e-4c1c-bfff-6ed345623dc1"
	Oct 18 10:37:27 no-preload-027087 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 10:37:27 no-preload-027087 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 10:37:27 no-preload-027087 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [9919fe4eee7dc51c131498b9e1e50e76edc9753040feea7bff2ec0193354e184] <==
	2025/10/18 10:36:43 Using namespace: kubernetes-dashboard
	2025/10/18 10:36:43 Using in-cluster config to connect to apiserver
	2025/10/18 10:36:43 Using secret token for csrf signing
	2025/10/18 10:36:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 10:36:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 10:36:43 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 10:36:43 Generating JWE encryption key
	2025/10/18 10:36:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 10:36:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 10:36:44 Initializing JWE encryption key from synchronized object
	2025/10/18 10:36:44 Creating in-cluster Sidecar client
	2025/10/18 10:36:44 Serving insecurely on HTTP port: 9090
	2025/10/18 10:36:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 10:37:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 10:36:43 Starting overwatch
	
	
	==> storage-provisioner [2cbdb2a8528e4250452cbcfde4d0a6d774dfa919eece0abfe3baf1ff93f2c38d] <==
	I1018 10:37:02.194109       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 10:37:02.194165       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 10:37:02.197553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:05.659989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:09.920138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:13.518756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:16.571662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:19.593689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:19.598687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:37:19.598845       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 10:37:19.599005       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-027087_3034f84f-0d58-40f5-901d-adb53244db78!
	I1018 10:37:19.601031       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a2f18fe6-030e-454a-877d-bce5a2ea2a3e", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-027087_3034f84f-0d58-40f5-901d-adb53244db78 became leader
	W1018 10:37:19.602488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:19.608380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:37:19.700628       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-027087_3034f84f-0d58-40f5-901d-adb53244db78!
	W1018 10:37:21.611813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:21.618908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:23.622276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:23.626939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:25.631256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:25.638369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:27.641793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:27.647654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:29.659983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:29.666266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c22c014947e9e9dc024d5d72f215ef4605e6ee6ca05a8753ddd66dd51ee9561c] <==
	I1018 10:36:31.610412       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 10:37:01.616833       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
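The fatal line in the second storage-provisioner container ends in "dial tcp 10.96.0.1:443: i/o timeout": the in-cluster Service VIP 10.96.0.1 was unreachable for roughly the first 30 seconds after the restart, which is consistent with the kindnet reflector timeouts logged at 10:37:01. A minimal sketch for checking this by hand, assuming the profile name from this report, the 8443/tcp -> 127.0.0.1:33472 mapping shown in the docker inspect output below, that curl is present in the kicbase node image, and that anonymous access to /version is enabled; this is illustrative only, not part of the harness:

	# From inside the node: does the Service VIP answer? (minikube ssh passes the trailing command through)
	out/minikube-linux-arm64 -p no-preload-027087 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version
	# From the host, via the published apiserver port (8443/tcp -> 127.0.0.1:33472 per docker inspect):
	curl -sk --max-time 5 https://127.0.0.1:33472/version

If the second command answers while the first times out, the problem is in-cluster service routing (kube-proxy/CNI) rather than the apiserver itself.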
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-027087 -n no-preload-027087
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-027087 -n no-preload-027087: exit status 2 (374.775293ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-027087 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
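The status and pod queries above can be replayed by hand when triaging a paused profile. A sketch using the same binary and profile as this report; --format takes a Go template over minikube's status struct, and {{.Host}} and {{.APIServer}} appear verbatim above, while {{.Kubelet}} is assumed from minikube's documented status fields:

	# Per-component status via a Go template over minikube's status struct:
	out/minikube-linux-arm64 status -p no-preload-027087 --format='{{.Host}} {{.APIServer}} {{.Kubelet}}'
	# The harness's own post-mortem query: names of any pods not in phase Running:
	kubectl --context no-preload-027087 get po -A -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running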
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-027087
helpers_test.go:243: (dbg) docker inspect no-preload-027087:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75",
	        "Created": "2025-10-18T10:34:32.909990218Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500475,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T10:36:13.029786514Z",
	            "FinishedAt": "2025-10-18T10:36:11.262531109Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/hostname",
	        "HostsPath": "/var/lib/docker/containers/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/hosts",
	        "LogPath": "/var/lib/docker/containers/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75-json.log",
	        "Name": "/no-preload-027087",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-027087:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-027087",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75",
	                "LowerDir": "/var/lib/docker/overlay2/dc44eaa4dc21510f8bf74df6fee94b5b27213db1c9918e5fa7933fdeabf5674e-init/diff:/var/lib/docker/overlay2/041484bdb0cce0c3101a575bf80b0a791602474c1cc52d8f6ad16241dd6bdddf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dc44eaa4dc21510f8bf74df6fee94b5b27213db1c9918e5fa7933fdeabf5674e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dc44eaa4dc21510f8bf74df6fee94b5b27213db1c9918e5fa7933fdeabf5674e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dc44eaa4dc21510f8bf74df6fee94b5b27213db1c9918e5fa7933fdeabf5674e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-027087",
	                "Source": "/var/lib/docker/volumes/no-preload-027087/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-027087",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-027087",
	                "name.minikube.sigs.k8s.io": "no-preload-027087",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7348f34b1bfc071dee859e95d1c9cda99310e2d60e64e7203a706f5df8aac4b3",
	            "SandboxKey": "/var/run/docker/netns/7348f34b1bfc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-027087": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:80:31:e8:b5:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "87a54e6a9010d18200cc9cc9a9c81fbb30eaec85d99c1ec1614afefa1f14d2cb",
	                    "EndpointID": "64f7f96362fc70aa6b8f6ffbf3feed7693751d5cc1a79c52cd5ec47b23cd9478",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-027087",
	                        "f282a9c13400"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
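When only a few fields of the inspect dump matter, docker inspect accepts a Go template via -f/--format instead of emitting the full JSON. A short sketch over fields that appear verbatim in the JSON above:

	# Pause state, which is what the .../serial/Pause test asserts on:
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-027087
	# Container IP on the per-profile network (192.168.76.2 above):
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-027087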
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-027087 -n no-preload-027087
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-027087 -n no-preload-027087: exit status 2 (424.46902ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-027087 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-027087 logs -n 25: (1.371228634s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p default-k8s-diff-port-715182 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-715182                                                                                                                                                                                                               │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p default-k8s-diff-port-715182                                                                                                                                                                                                               │ default-k8s-diff-port-715182 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-922359                                                                                                                                                                                                               │ disable-driver-mounts-922359 │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ start   │ -p no-preload-027087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:35 UTC │
	│ image   │ embed-certs-101897 image list --format=json                                                                                                                                                                                                   │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ pause   │ -p embed-certs-101897 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │                     │
	│ delete  │ -p embed-certs-101897                                                                                                                                                                                                                         │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ delete  │ -p embed-certs-101897                                                                                                                                                                                                                         │ embed-certs-101897           │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:34 UTC │
	│ start   │ -p newest-cni-577403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:34 UTC │ 18 Oct 25 10:35 UTC │
	│ addons  │ enable metrics-server -p newest-cni-577403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │                     │
	│ stop    │ -p newest-cni-577403 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ addons  │ enable dashboard -p newest-cni-577403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ start   │ -p newest-cni-577403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ image   │ newest-cni-577403 image list --format=json                                                                                                                                                                                                    │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:35 UTC │
	│ pause   │ -p newest-cni-577403 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-027087 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │                     │
	│ stop    │ -p no-preload-027087 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:35 UTC │ 18 Oct 25 10:36 UTC │
	│ delete  │ -p newest-cni-577403                                                                                                                                                                                                                          │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:36 UTC │ 18 Oct 25 10:36 UTC │
	│ delete  │ -p newest-cni-577403                                                                                                                                                                                                                          │ newest-cni-577403            │ jenkins │ v1.37.0 │ 18 Oct 25 10:36 UTC │ 18 Oct 25 10:36 UTC │
	│ start   │ -p auto-881658 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-881658                  │ jenkins │ v1.37.0 │ 18 Oct 25 10:36 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-027087 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:36 UTC │ 18 Oct 25 10:36 UTC │
	│ start   │ -p no-preload-027087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:36 UTC │ 18 Oct 25 10:37 UTC │
	│ image   │ no-preload-027087 image list --format=json                                                                                                                                                                                                    │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:37 UTC │ 18 Oct 25 10:37 UTC │
	│ pause   │ -p no-preload-027087 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-027087            │ jenkins │ v1.37.0 │ 18 Oct 25 10:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 10:36:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 10:36:12.513246  500152 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:36:12.513414  500152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:36:12.513436  500152 out.go:374] Setting ErrFile to fd 2...
	I1018 10:36:12.513456  500152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:36:12.513722  500152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:36:12.514100  500152 out.go:368] Setting JSON to false
	I1018 10:36:12.514974  500152 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8323,"bootTime":1760775450,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:36:12.516581  500152 start.go:141] virtualization:  
	I1018 10:36:12.520623  500152 out.go:179] * [no-preload-027087] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:36:12.523578  500152 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:36:12.523666  500152 notify.go:220] Checking for updates...
	I1018 10:36:12.529352  500152 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:36:12.532695  500152 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:36:12.536034  500152 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:36:12.538082  500152 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:36:12.540966  500152 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:36:12.544250  500152 config.go:182] Loaded profile config "no-preload-027087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:36:12.544795  500152 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:36:12.619250  500152 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:36:12.619382  500152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:36:12.758925  500152 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-18 10:36:12.749588162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:36:12.759064  500152 docker.go:318] overlay module found
	I1018 10:36:12.762318  500152 out.go:179] * Using the docker driver based on existing profile
	I1018 10:36:12.765208  500152 start.go:305] selected driver: docker
	I1018 10:36:12.765225  500152 start.go:925] validating driver "docker" against &{Name:no-preload-027087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-027087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:36:12.765369  500152 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:36:12.766017  500152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:36:12.913520  500152 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-18 10:36:12.89861497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
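The docker info line above is minikube decoding the JSON emitted by `docker system info --format "{{json .}}"` (info.go:266) to learn the host's capabilities. A minimal sketch of the same probe, assuming only a docker CLI on PATH; the struct below is an illustrative subset, not minikube's actual type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Subset of the fields visible in the log line above; the JSON keys
// match docker's output, the struct itself is hypothetical.
type dockerInfo struct {
	NCPU         int    `json:"NCPU"`
	MemTotal     int64  `json:"MemTotal"`
	CgroupDriver string `json:"CgroupDriver"`
	OSType       string `json:"OSType"`
	Architecture string `json:"Architecture"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("cpus=%d mem=%dMiB cgroup=%s os=%s/%s\n",
		info.NCPU, info.MemTotal/1024/1024, info.CgroupDriver, info.OSType, info.Architecture)
}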
	I1018 10:36:12.913850  500152 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:36:12.913885  500152 cni.go:84] Creating CNI manager for ""
	I1018 10:36:12.913949  500152 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:36:12.913992  500152 start.go:349] cluster config:
	{Name:no-preload-027087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-027087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
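The cluster config block above is the Go-struct dump of the profile that minikube persists as config.json under .minikube/profiles/no-preload-027087/ (the profile.go:143 line). A sketch of reading a few of those fields back from disk; the trimmed struct is hypothetical, the real ClusterConfig has many more fields:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Illustrative subset of the persisted profile config.
type clusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	KubernetesConfig struct {
		KubernetesVersion string
		ContainerRuntime  string
		ServiceCIDR       string
	}
}

func main() {
	// Path shape taken from the log lines above.
	b, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/no-preload-027087/config.json"))
	if err != nil {
		panic(err)
	}
	var cc clusterConfig
	if err := json.Unmarshal(b, &cc); err != nil {
		panic(err)
	}
	fmt.Printf("%s: driver=%s k8s=%s runtime=%s\n",
		cc.Name, cc.Driver, cc.KubernetesConfig.KubernetesVersion, cc.KubernetesConfig.ContainerRuntime)
}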
	I1018 10:36:12.917403  500152 out.go:179] * Starting "no-preload-027087" primary control-plane node in "no-preload-027087" cluster
	I1018 10:36:12.921285  500152 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 10:36:12.924392  500152 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 10:36:12.927216  500152 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:36:12.927375  500152 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/config.json ...
	I1018 10:36:12.927698  500152 cache.go:107] acquiring lock: {Name:mkaf3d4648d07ea61f5c43b4ac6cff6e96e07d0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.927776  500152 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 10:36:12.927788  500152 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 97.24µs
	I1018 10:36:12.927801  500152 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 10:36:12.927817  500152 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 10:36:12.928077  500152 cache.go:107] acquiring lock: {Name:mkce90ae98faaf046844c77feccd02a8c89b22bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.928148  500152 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 10:36:12.928157  500152 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 86.065µs
	I1018 10:36:12.928164  500152 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 10:36:12.928175  500152 cache.go:107] acquiring lock: {Name:mkaa713f6c6c749f7890994ea47ccb489ab7b76a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.928205  500152 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 10:36:12.928210  500152 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 35.881µs
	I1018 10:36:12.928216  500152 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 10:36:12.928228  500152 cache.go:107] acquiring lock: {Name:mkbf154924b5d05f1add0f80d2d8992cab46ca22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.928262  500152 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 10:36:12.928267  500152 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 43.341µs
	I1018 10:36:12.928273  500152 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 10:36:12.928288  500152 cache.go:107] acquiring lock: {Name:mk7c500c022aee187177cdcb3e6cd138895cc689 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.928316  500152 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 10:36:12.928321  500152 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 40.222µs
	I1018 10:36:12.928327  500152 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 10:36:12.928337  500152 cache.go:107] acquiring lock: {Name:mk8d87cb313c81485b1cabba19862a22e85903db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.928362  500152 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 10:36:12.928367  500152 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 31.106µs
	I1018 10:36:12.928373  500152 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 10:36:12.928381  500152 cache.go:107] acquiring lock: {Name:mkf60d23fd6f24668b2e7aa1b277366e0a8c4f15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.928406  500152 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1018 10:36:12.928429  500152 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 48.624µs
	I1018 10:36:12.928436  500152 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 10:36:12.928446  500152 cache.go:107] acquiring lock: {Name:mk79330e484fcb6a5af61229914c16bea91c5633 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.928473  500152 cache.go:115] /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 10:36:12.928478  500152 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 33.322µs
	I1018 10:36:12.928483  500152 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 10:36:12.928490  500152 cache.go:87] Successfully saved all images to host disk.
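Every cache line above follows one pattern: acquire a per-image lock (cache.go:107), stat the tarball under .minikube/cache/images/arm64/, and skip the save when it already exists, which is why each check takes only tens of microseconds. A minimal sketch of that fast path; the helper names and paths are illustrative:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
)

// One mutex per destination path, standing in for the named locks
// (mkaf3d..., mkce90...) acquired in the log above.
var cacheLocks sync.Map

// cachedImagePath maps "registry.k8s.io/pause:3.10.1" to
// <root>/registry.k8s.io/pause_3.10.1, the layout visible in the log.
func cachedImagePath(root, image string) string {
	return filepath.Join(root, strings.ReplaceAll(image, ":", "_"))
}

// ensureCached reports a cache hit without re-downloading, mirroring
// cache.go's "exists ... succeeded" fast path.
func ensureCached(root, image string) (bool, error) {
	dst := cachedImagePath(root, image)
	mu, _ := cacheLocks.LoadOrStore(dst, &sync.Mutex{})
	mu.(*sync.Mutex).Lock()
	defer mu.(*sync.Mutex).Unlock()
	if _, err := os.Stat(dst); err == nil {
		return true, nil // tarball already saved; skip the pull
	}
	// ...pull the image and write the tarball to dst here...
	return false, nil
}

func main() {
	hit, err := ensureCached("/tmp/images/arm64", "registry.k8s.io/pause:3.10.1")
	if err != nil {
		panic(err)
	}
	fmt.Println("cache hit:", hit)
}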
	I1018 10:36:12.957402  500152 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 10:36:12.957421  500152 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 10:36:12.957439  500152 cache.go:232] Successfully downloaded all kic artifacts
	I1018 10:36:12.957461  500152 start.go:360] acquireMachinesLock for no-preload-027087: {Name:mk3407a2c92d7e64b372433da7fc52893eca365e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 10:36:12.957512  500152 start.go:364] duration metric: took 36.448µs to acquireMachinesLock for "no-preload-027087"
	I1018 10:36:12.957539  500152 start.go:96] Skipping create...Using existing machine configuration
	I1018 10:36:12.957545  500152 fix.go:54] fixHost starting: 
	I1018 10:36:12.957805  500152 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:36:12.987068  500152 fix.go:112] recreateIfNeeded on no-preload-027087: state=Stopped err=<nil>
	W1018 10:36:12.987112  500152 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 10:36:10.998643  499205 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-881658:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.417064984s)
	I1018 10:36:10.998676  499205 kic.go:203] duration metric: took 4.417219176s to extract preloaded images to volume ...
	W1018 10:36:10.998812  499205 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 10:36:10.998916  499205 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 10:36:11.054675  499205 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-881658 --name auto-881658 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-881658 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-881658 --network auto-881658 --ip 192.168.85.2 --volume auto-881658:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 10:36:11.430992  499205 cli_runner.go:164] Run: docker container inspect auto-881658 --format={{.State.Running}}
	I1018 10:36:11.458648  499205 cli_runner.go:164] Run: docker container inspect auto-881658 --format={{.State.Status}}
	I1018 10:36:11.491991  499205 cli_runner.go:164] Run: docker exec auto-881658 stat /var/lib/dpkg/alternatives/iptables
	I1018 10:36:11.563398  499205 oci.go:144] the created container "auto-881658" has a running status.
	I1018 10:36:11.563437  499205 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa...
	I1018 10:36:13.621436  499205 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 10:36:13.654848  499205 cli_runner.go:164] Run: docker container inspect auto-881658 --format={{.State.Status}}
	I1018 10:36:13.680443  499205 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 10:36:13.680464  499205 kic_runner.go:114] Args: [docker exec --privileged auto-881658 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 10:36:13.725903  499205 cli_runner.go:164] Run: docker container inspect auto-881658 --format={{.State.Status}}
	I1018 10:36:13.748663  499205 machine.go:93] provisionDockerMachine start ...
	I1018 10:36:13.748766  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:13.781757  499205 main.go:141] libmachine: Using SSH client type: native
	I1018 10:36:13.782084  499205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I1018 10:36:13.782093  499205 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:36:13.949147  499205 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-881658
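libmachine's native SSH client dials the container's forwarded port on 127.0.0.1 (33464 here) and runs provisioning commands over it, starting with `hostname`. A sketch of an equivalent probe with golang.org/x/crypto/ssh, assuming key auth with the generated id_rsa; host-key checking is skipped because the endpoint is a localhost-forwarded test container:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port taken from the log above.
	keyBytes, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/auto-881658/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a localhost-forwarded test VM
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33464", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}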
	
	I1018 10:36:13.949237  499205 ubuntu.go:182] provisioning hostname "auto-881658"
	I1018 10:36:13.949354  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:13.967284  499205 main.go:141] libmachine: Using SSH client type: native
	I1018 10:36:13.967594  499205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I1018 10:36:13.967605  499205 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-881658 && echo "auto-881658" | sudo tee /etc/hostname
	I1018 10:36:14.127078  499205 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-881658
	
	I1018 10:36:14.127175  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:14.144386  499205 main.go:141] libmachine: Using SSH client type: native
	I1018 10:36:14.144699  499205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I1018 10:36:14.144716  499205 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-881658' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-881658/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-881658' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:36:14.289382  499205 main.go:141] libmachine: SSH cmd err, output: <nil>: 
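The /etc/hosts script above is deliberately idempotent: it adds a 127.0.1.1 entry only when no line already names the host, and rewrites an existing 127.0.1.1 line rather than appending a duplicate. The same logic, sketched in Go over the file contents:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the grep/sed sequence from the log: if any
// line already ends with the hostname, do nothing; else rewrite an
// existing 127.0.1.1 line, or append one. Pure function over a copy.
func ensureHostsEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // hostname already present
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	b, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(ensureHostsEntry(string(b), "auto-881658"))
}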
	I1018 10:36:14.289412  499205 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:36:14.289436  499205 ubuntu.go:190] setting up certificates
	I1018 10:36:14.289447  499205 provision.go:84] configureAuth start
	I1018 10:36:14.289511  499205 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-881658
	I1018 10:36:14.306221  499205 provision.go:143] copyHostCerts
	I1018 10:36:14.306286  499205 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:36:14.306299  499205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:36:14.306380  499205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:36:14.306484  499205 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:36:14.306495  499205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:36:14.306522  499205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:36:14.306757  499205 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:36:14.306770  499205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:36:14.306801  499205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:36:14.306867  499205 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.auto-881658 san=[127.0.0.1 192.168.85.2 auto-881658 localhost minikube]
	I1018 10:36:14.368446  499205 provision.go:177] copyRemoteCerts
	I1018 10:36:14.368521  499205 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:36:14.368563  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:14.392355  499205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa Username:docker}
	I1018 10:36:14.497028  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1018 10:36:14.514630  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 10:36:14.533328  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:36:14.554203  499205 provision.go:87] duration metric: took 264.739955ms to configureAuth
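configureAuth generates a server certificate carrying the SANs listed above (127.0.0.1, the node IP, the hostname, localhost, minikube), signed by the minikube CA. A condensed sketch of SAN-bearing certificate generation with crypto/x509; it self-signs for brevity where minikube signs with its CA key, and the key size and validity here are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.auto-881658"}}, // org from the log line above
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"auto-881658", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}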
	I1018 10:36:14.554232  499205 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:36:14.554421  499205 config.go:182] Loaded profile config "auto-881658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:36:14.554531  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:14.571498  499205 main.go:141] libmachine: Using SSH client type: native
	I1018 10:36:14.571817  499205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I1018 10:36:14.571838  499205 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:36:14.854453  499205 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:36:14.854474  499205 machine.go:96] duration metric: took 1.105791654s to provisionDockerMachine
	I1018 10:36:14.854484  499205 client.go:171] duration metric: took 8.955276861s to LocalClient.Create
	I1018 10:36:14.854497  499205 start.go:167] duration metric: took 8.955355836s to libmachine.API.Create "auto-881658"
	I1018 10:36:14.854503  499205 start.go:293] postStartSetup for "auto-881658" (driver="docker")
	I1018 10:36:14.854517  499205 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:36:14.854615  499205 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:36:14.854655  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:14.882879  499205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa Username:docker}
	I1018 10:36:14.998269  499205 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:36:15.001855  499205 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:36:15.001885  499205 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:36:15.001896  499205 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:36:15.001954  499205 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:36:15.002042  499205 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:36:15.002143  499205 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:36:15.012845  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:36:15.040344  499205 start.go:296] duration metric: took 185.825601ms for postStartSetup
	I1018 10:36:15.040794  499205 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-881658
	I1018 10:36:15.066475  499205 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/config.json ...
	I1018 10:36:15.066788  499205 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:36:15.066848  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:15.085616  499205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa Username:docker}
	I1018 10:36:15.190999  499205 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:36:15.196349  499205 start.go:128] duration metric: took 9.300879285s to createHost
	I1018 10:36:15.196415  499205 start.go:83] releasing machines lock for "auto-881658", held for 9.301053736s
	I1018 10:36:15.196492  499205 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-881658
	I1018 10:36:15.214254  499205 ssh_runner.go:195] Run: cat /version.json
	I1018 10:36:15.214307  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:15.214378  499205 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:36:15.214451  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:15.240906  499205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa Username:docker}
	I1018 10:36:15.242768  499205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa Username:docker}
	I1018 10:36:15.441280  499205 ssh_runner.go:195] Run: systemctl --version
	I1018 10:36:15.448051  499205 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:36:15.486311  499205 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:36:15.490630  499205 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:36:15.490709  499205 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:36:15.520479  499205 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 10:36:15.520501  499205 start.go:495] detecting cgroup driver to use...
	I1018 10:36:15.520535  499205 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:36:15.520587  499205 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:36:15.539233  499205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:36:15.552493  499205 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:36:15.552560  499205 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:36:15.570392  499205 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:36:15.589687  499205 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:36:15.710179  499205 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:36:15.838031  499205 docker.go:234] disabling docker service ...
	I1018 10:36:15.838120  499205 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:36:15.860504  499205 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:36:15.874170  499205 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:36:15.993544  499205 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:36:16.115533  499205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:36:16.129630  499205 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:36:16.144953  499205 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:36:16.145056  499205 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:16.154614  499205 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:36:16.154732  499205 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:16.163985  499205 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:16.173047  499205 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:16.181560  499205 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:36:16.189988  499205 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:16.198593  499205 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:16.212255  499205 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:16.221515  499205 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:36:16.230495  499205 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:36:16.238001  499205 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:36:16.354408  499205 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 10:36:16.471884  499205 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:36:16.471951  499205 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:36:16.475904  499205 start.go:563] Will wait 60s for crictl version
	I1018 10:36:16.475967  499205 ssh_runner.go:195] Run: which crictl
	I1018 10:36:16.480088  499205 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:36:16.514558  499205 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:36:16.514648  499205 ssh_runner.go:195] Run: crio --version
	I1018 10:36:16.556698  499205 ssh_runner.go:195] Run: crio --version
	I1018 10:36:16.595952  499205 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
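Between writing /etc/crictl.yaml and restarting crio, the sed one-liners above all rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs with conmon in the "pod" cgroup, and open unprivileged ports via default_sysctls. A sketch that condenses those edits in Go instead of sed; the regexes are illustrative:

package main

import (
	"os"
	"regexp"
)

// rewriteCrioConf condenses the sed sequence from the log into one pass.
func rewriteCrioConf(conf []byte) []byte {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).Match(conf) {
		conf = append(conf, []byte("\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n")...)
	}
	return conf
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	b, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile(path, rewriteCrioConf(b), 0o644); err != nil {
		panic(err)
	}
	// The log then runs `systemctl daemon-reload` and `systemctl restart crio`.
}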
	I1018 10:36:12.990546  500152 out.go:252] * Restarting existing docker container for "no-preload-027087" ...
	I1018 10:36:12.990652  500152 cli_runner.go:164] Run: docker start no-preload-027087
	I1018 10:36:13.323768  500152 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:36:13.380633  500152 kic.go:430] container "no-preload-027087" state is running.
	I1018 10:36:13.381045  500152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-027087
	I1018 10:36:13.431920  500152 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/config.json ...
	I1018 10:36:13.432153  500152 machine.go:93] provisionDockerMachine start ...
	I1018 10:36:13.432217  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:13.487902  500152 main.go:141] libmachine: Using SSH client type: native
	I1018 10:36:13.488489  500152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1018 10:36:13.488507  500152 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 10:36:13.489237  500152 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 10:36:16.640901  500152 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-027087
	
	I1018 10:36:16.640931  500152 ubuntu.go:182] provisioning hostname "no-preload-027087"
	I1018 10:36:16.640999  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:16.662683  500152 main.go:141] libmachine: Using SSH client type: native
	I1018 10:36:16.662985  500152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1018 10:36:16.662997  500152 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-027087 && echo "no-preload-027087" | sudo tee /etc/hostname
	I1018 10:36:16.835669  500152 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-027087
	
	I1018 10:36:16.835744  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:16.862526  500152 main.go:141] libmachine: Using SSH client type: native
	I1018 10:36:16.862839  500152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1018 10:36:16.862861  500152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-027087' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-027087/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-027087' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 10:36:17.037390  500152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 10:36:17.037476  500152 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-293333/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-293333/.minikube}
	I1018 10:36:17.037539  500152 ubuntu.go:190] setting up certificates
	I1018 10:36:17.037569  500152 provision.go:84] configureAuth start
	I1018 10:36:17.037657  500152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-027087
	I1018 10:36:17.081444  500152 provision.go:143] copyHostCerts
	I1018 10:36:17.081516  500152 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem, removing ...
	I1018 10:36:17.081533  500152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem
	I1018 10:36:17.081619  500152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/ca.pem (1078 bytes)
	I1018 10:36:17.081718  500152 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem, removing ...
	I1018 10:36:17.081724  500152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem
	I1018 10:36:17.081749  500152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/cert.pem (1123 bytes)
	I1018 10:36:17.081807  500152 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem, removing ...
	I1018 10:36:17.081812  500152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem
	I1018 10:36:17.081834  500152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-293333/.minikube/key.pem (1675 bytes)
	I1018 10:36:17.081887  500152 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem org=jenkins.no-preload-027087 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-027087]
	I1018 10:36:17.222922  500152 provision.go:177] copyRemoteCerts
	I1018 10:36:17.222994  500152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 10:36:17.223041  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:17.250752  500152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:36:17.357810  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 10:36:17.378671  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 10:36:17.399534  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 10:36:17.420028  500152 provision.go:87] duration metric: took 382.408017ms to configureAuth
	I1018 10:36:17.420055  500152 ubuntu.go:206] setting minikube options for container-runtime
	I1018 10:36:17.420240  500152 config.go:182] Loaded profile config "no-preload-027087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:36:17.420345  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:17.445017  500152 main.go:141] libmachine: Using SSH client type: native
	I1018 10:36:17.445385  500152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1018 10:36:17.445401  500152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 10:36:17.813337  500152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 10:36:17.813360  500152 machine.go:96] duration metric: took 4.381198152s to provisionDockerMachine
	I1018 10:36:17.813370  500152 start.go:293] postStartSetup for "no-preload-027087" (driver="docker")
	I1018 10:36:17.813388  500152 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 10:36:17.813450  500152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 10:36:17.813501  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:17.839821  500152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:36:17.951800  500152 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 10:36:17.966626  500152 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 10:36:17.966669  500152 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 10:36:17.966682  500152 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/addons for local assets ...
	I1018 10:36:17.966737  500152 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-293333/.minikube/files for local assets ...
	I1018 10:36:17.966823  500152 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem -> 2951932.pem in /etc/ssl/certs
	I1018 10:36:17.966924  500152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 10:36:17.983427  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:36:18.003814  500152 start.go:296] duration metric: took 190.428486ms for postStartSetup
	I1018 10:36:18.003907  500152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:36:18.003951  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:18.024321  500152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:36:18.131375  500152 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 10:36:18.137524  500152 fix.go:56] duration metric: took 5.179967219s for fixHost
	I1018 10:36:18.137554  500152 start.go:83] releasing machines lock for "no-preload-027087", held for 5.180033575s
	I1018 10:36:18.137634  500152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-027087
	I1018 10:36:18.161547  500152 ssh_runner.go:195] Run: cat /version.json
	I1018 10:36:18.161596  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:18.161904  500152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 10:36:18.161957  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:18.189114  500152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:36:18.213696  500152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:36:18.298125  500152 ssh_runner.go:195] Run: systemctl --version
	I1018 10:36:18.398935  500152 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 10:36:18.447456  500152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 10:36:18.452376  500152 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 10:36:18.452444  500152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 10:36:18.461247  500152 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 10:36:18.461266  500152 start.go:495] detecting cgroup driver to use...
	I1018 10:36:18.461298  500152 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 10:36:18.461343  500152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 10:36:18.477966  500152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 10:36:18.492821  500152 docker.go:218] disabling cri-docker service (if available) ...
	I1018 10:36:18.492887  500152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 10:36:18.509786  500152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 10:36:18.524941  500152 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 10:36:18.693699  500152 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 10:36:18.871643  500152 docker.go:234] disabling docker service ...
	I1018 10:36:18.871719  500152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 10:36:18.889151  500152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 10:36:18.903683  500152 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 10:36:19.056017  500152 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 10:36:19.215315  500152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 10:36:19.233408  500152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 10:36:19.250114  500152 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 10:36:19.250176  500152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:19.259922  500152 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 10:36:19.259996  500152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:19.271936  500152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:19.287905  500152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:19.299843  500152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 10:36:19.308849  500152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:19.319593  500152 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:19.328579  500152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 10:36:19.337966  500152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 10:36:19.346409  500152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 10:36:19.354624  500152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:36:19.535684  500152 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 10:36:19.692699  500152 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 10:36:19.692763  500152 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 10:36:19.697513  500152 start.go:563] Will wait 60s for crictl version
	I1018 10:36:19.697633  500152 ssh_runner.go:195] Run: which crictl
	I1018 10:36:19.701351  500152 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 10:36:19.736875  500152 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 10:36:19.737014  500152 ssh_runner.go:195] Run: crio --version
	I1018 10:36:19.783523  500152 ssh_runner.go:195] Run: crio --version
	I1018 10:36:19.840854  500152 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 10:36:16.598601  499205 cli_runner.go:164] Run: docker network inspect auto-881658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:36:16.614915  499205 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 10:36:16.618866  499205 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:36:16.629524  499205 kubeadm.go:883] updating cluster {Name:auto-881658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-881658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:36:16.629649  499205 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:36:16.629715  499205 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:36:16.680595  499205 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:36:16.680616  499205 crio.go:433] Images already preloaded, skipping extraction
	I1018 10:36:16.680672  499205 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:36:16.708696  499205 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:36:16.708715  499205 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:36:16.708722  499205 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 10:36:16.708812  499205 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-881658 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-881658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 10:36:16.708894  499205 ssh_runner.go:195] Run: crio config
	I1018 10:36:16.804428  499205 cni.go:84] Creating CNI manager for ""
	I1018 10:36:16.804493  499205 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:36:16.804525  499205 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:36:16.804580  499205 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-881658 NodeName:auto-881658 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:36:16.804737  499205 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-881658"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
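
The KubeletConfiguration document above deliberately neuters disk-pressure handling for CI (imageGCHighThresholdPercent: 100 and every evictionHard threshold at 0%), so test pods are never evicted for disk usage. A hedged Go sketch that pulls those fields back out of the multi-document kubeadm.yaml, assuming gopkg.in/yaml.v3 is available:

// Sketch: reading the eviction settings out of the generated config.
// The path is the one scp'd above; the file holds several YAML
// documents separated by "---", so we decode a stream.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("imageGCHighThresholdPercent:", doc["imageGCHighThresholdPercent"])
			fmt.Println("evictionHard:", doc["evictionHard"])
		}
	}
}
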
	
	I1018 10:36:16.804823  499205 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:36:16.813149  499205 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:36:16.813276  499205 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:36:16.821873  499205 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1018 10:36:16.842468  499205 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:36:16.857664  499205 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1018 10:36:16.878543  499205 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:36:16.883000  499205 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
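
The bash one-liner above makes the /etc/hosts entry idempotent: strip any old line for the name, then append the fresh mapping. The same logic as a small Go sketch (assumes permission to write /etc/hosts):

// Sketch: idempotent /etc/hosts pinning, mirroring the grep-v/echo/cp
// one-liner in the log above.
package main

import (
	"os"
	"strings"
)

func pinHost(ip, name string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop stale entries
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("192.168.85.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
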
	I1018 10:36:16.894012  499205 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:36:17.031944  499205 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:36:17.054932  499205 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658 for IP: 192.168.85.2
	I1018 10:36:17.054950  499205 certs.go:195] generating shared ca certs ...
	I1018 10:36:17.054967  499205 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:17.055107  499205 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:36:17.055147  499205 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:36:17.055154  499205 certs.go:257] generating profile certs ...
	I1018 10:36:17.055207  499205 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.key
	I1018 10:36:17.055218  499205 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt with IP's: []
	I1018 10:36:17.316717  499205 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt ...
	I1018 10:36:17.316801  499205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt: {Name:mk2a0e2efcca901b177388430f644cc2a3c5a78e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:17.317100  499205 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.key ...
	I1018 10:36:17.317149  499205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.key: {Name:mk4edfdbb2cf2fe230e74bbb751a4acd670dd51c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:17.317336  499205 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.key.65aa3f66
	I1018 10:36:17.317388  499205 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.crt.65aa3f66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 10:36:18.801774  499205 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.crt.65aa3f66 ...
	I1018 10:36:18.801808  499205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.crt.65aa3f66: {Name:mkd38a34570225be4b64c5f2be447acccfbd44e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:18.801994  499205 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.key.65aa3f66 ...
	I1018 10:36:18.802011  499205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.key.65aa3f66: {Name:mkc28f42b1bf8858fb9964bdbb8e42e82affed28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:18.802094  499205 certs.go:382] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.crt.65aa3f66 -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.crt
	I1018 10:36:18.802195  499205 certs.go:386] copying /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.key.65aa3f66 -> /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.key
	I1018 10:36:18.802258  499205 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/proxy-client.key
	I1018 10:36:18.802277  499205 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/proxy-client.crt with IP's: []
	I1018 10:36:18.926807  499205 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/proxy-client.crt ...
	I1018 10:36:18.926838  499205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/proxy-client.crt: {Name:mk3c01d16ed59ea21230b79d5cc98161fde9be21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:18.927064  499205 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/proxy-client.key ...
	I1018 10:36:18.927083  499205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/proxy-client.key: {Name:mkf68824996920ea57e33eb17f89bcff14154bb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
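
Note the SAN list used for the apiserver cert above ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]): 10.96.0.1 is the first usable address of ServiceCIDR 10.96.0.0/12, which the in-cluster "kubernetes" Service always claims, so the cert must cover it. A small Go sketch deriving that address:

// Sketch: the apiserver ClusterIP is the service CIDR's network
// address plus one, which is why 10.96.0.1 appears in the SANs.
package main

import (
	"fmt"
	"net"
)

func firstServiceIP(cidr string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[len(out)-1]++ // network address + 1 (overflow ignored in this sketch)
	return out, nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 10.96.0.1
}
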
	I1018 10:36:18.927289  499205 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:36:18.927332  499205 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:36:18.927348  499205 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:36:18.927375  499205 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:36:18.927407  499205 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:36:18.927464  499205 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:36:18.927516  499205 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:36:18.928170  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:36:18.956571  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:36:18.984576  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:36:19.007061  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:36:19.033887  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1018 10:36:19.061049  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 10:36:19.081608  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:36:19.103275  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 10:36:19.136688  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:36:19.159511  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:36:19.179437  499205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:36:19.204935  499205 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:36:19.222914  499205 ssh_runner.go:195] Run: openssl version
	I1018 10:36:19.231477  499205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:36:19.241016  499205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:36:19.245457  499205 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:36:19.245521  499205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:36:19.288652  499205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:36:19.297705  499205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:36:19.306907  499205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:36:19.311875  499205 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:36:19.311942  499205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:36:19.355475  499205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:36:19.364864  499205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:36:19.374268  499205 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:36:19.377945  499205 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:36:19.378053  499205 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:36:19.438423  499205 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
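
Each `ln -fs ... /etc/ssl/certs/<hash>.0` above uses the subject hash printed by `openssl x509 -hash -noout`, which is how OpenSSL-style trust stores index CA certificates. A hedged Go sketch of the same step, shelling out to openssl rather than re-implementing its canonical subject hash (assumes the openssl binary is present):

// Sketch: create the <subject-hash>.0 trust-store symlink for a PEM cert,
// matching the openssl/ln sequence in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
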
	I1018 10:36:19.457981  499205 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:36:19.462655  499205 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 10:36:19.462752  499205 kubeadm.go:400] StartCluster: {Name:auto-881658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-881658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:36:19.462884  499205 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:36:19.462978  499205 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:36:19.533512  499205 cri.go:89] found id: ""
	I1018 10:36:19.533631  499205 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:36:19.545135  499205 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 10:36:19.557621  499205 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 10:36:19.557743  499205 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 10:36:19.567322  499205 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 10:36:19.567393  499205 kubeadm.go:157] found existing configuration files:
	
	I1018 10:36:19.567474  499205 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 10:36:19.576098  499205 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 10:36:19.576238  499205 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 10:36:19.584212  499205 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 10:36:19.596572  499205 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 10:36:19.596687  499205 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 10:36:19.604614  499205 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 10:36:19.613423  499205 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 10:36:19.613548  499205 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 10:36:19.621438  499205 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 10:36:19.631884  499205 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 10:36:19.632005  499205 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
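
The four grep/rm pairs above implement stale-config cleanup: any kubeconfig that does not reference the expected control-plane endpoint is deleted so kubeadm regenerates it. A compact Go sketch of the same sweep:

// Sketch: remove any kubeconfig that does not point at the expected
// endpoint, mirroring the grep/rm sequence in the log above.
package main

import (
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f) // mirrors `rm -f`: a missing file is fine
		}
	}
}
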
	I1018 10:36:19.639802  499205 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 10:36:19.691444  499205 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 10:36:19.694923  499205 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 10:36:19.744457  499205 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 10:36:19.744531  499205 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 10:36:19.744568  499205 kubeadm.go:318] OS: Linux
	I1018 10:36:19.744615  499205 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 10:36:19.744666  499205 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 10:36:19.744715  499205 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 10:36:19.744766  499205 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 10:36:19.744818  499205 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 10:36:19.744868  499205 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 10:36:19.744915  499205 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 10:36:19.744966  499205 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 10:36:19.745014  499205 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 10:36:19.843030  499205 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 10:36:19.843145  499205 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 10:36:19.843240  499205 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 10:36:19.856397  499205 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 10:36:19.861512  499205 out.go:252]   - Generating certificates and keys ...
	I1018 10:36:19.861612  499205 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 10:36:19.861688  499205 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 10:36:20.221962  499205 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 10:36:20.581559  499205 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 10:36:19.843744  500152 cli_runner.go:164] Run: docker network inspect no-preload-027087 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 10:36:19.881244  500152 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 10:36:19.885898  500152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:36:19.899341  500152 kubeadm.go:883] updating cluster {Name:no-preload-027087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-027087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 10:36:19.899462  500152 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 10:36:19.899507  500152 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 10:36:19.933061  500152 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 10:36:19.933081  500152 cache_images.go:85] Images are preloaded, skipping loading
	I1018 10:36:19.933088  500152 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 10:36:19.933197  500152 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-027087 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-027087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 10:36:19.933271  500152 ssh_runner.go:195] Run: crio config
	I1018 10:36:20.001503  500152 cni.go:84] Creating CNI manager for ""
	I1018 10:36:20.001578  500152 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:36:20.001613  500152 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 10:36:20.001662  500152 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-027087 NodeName:no-preload-027087 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 10:36:20.001861  500152 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-027087"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 10:36:20.001977  500152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 10:36:20.011252  500152 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 10:36:20.011451  500152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 10:36:20.022705  500152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 10:36:20.042187  500152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 10:36:20.058850  500152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 10:36:20.072736  500152 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 10:36:20.077102  500152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 10:36:20.087936  500152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:36:20.246619  500152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:36:20.264026  500152 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087 for IP: 192.168.76.2
	I1018 10:36:20.264045  500152 certs.go:195] generating shared ca certs ...
	I1018 10:36:20.264060  500152 certs.go:227] acquiring lock for ca certs: {Name:mk5ac0fe57b76b41d515b720931dd179700132a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:20.264200  500152 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key
	I1018 10:36:20.264238  500152 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key
	I1018 10:36:20.264245  500152 certs.go:257] generating profile certs ...
	I1018 10:36:20.264330  500152 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.key
	I1018 10:36:20.264409  500152 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.key.1343fb15
	I1018 10:36:20.264447  500152 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/proxy-client.key
	I1018 10:36:20.264568  500152 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem (1338 bytes)
	W1018 10:36:20.264596  500152 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193_empty.pem, impossibly tiny 0 bytes
	I1018 10:36:20.264604  500152 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 10:36:20.264626  500152 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/ca.pem (1078 bytes)
	I1018 10:36:20.264646  500152 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/cert.pem (1123 bytes)
	I1018 10:36:20.264674  500152 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/certs/key.pem (1675 bytes)
	I1018 10:36:20.264719  500152 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem (1708 bytes)
	I1018 10:36:20.265413  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 10:36:20.315597  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 10:36:20.338998  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 10:36:20.369643  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 10:36:20.422480  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 10:36:20.486631  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 10:36:20.549203  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 10:36:20.571908  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 10:36:20.602909  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/certs/295193.pem --> /usr/share/ca-certificates/295193.pem (1338 bytes)
	I1018 10:36:20.628638  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/ssl/certs/2951932.pem --> /usr/share/ca-certificates/2951932.pem (1708 bytes)
	I1018 10:36:20.652428  500152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 10:36:20.671976  500152 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 10:36:20.693166  500152 ssh_runner.go:195] Run: openssl version
	I1018 10:36:20.699853  500152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295193.pem && ln -fs /usr/share/ca-certificates/295193.pem /etc/ssl/certs/295193.pem"
	I1018 10:36:20.709034  500152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295193.pem
	I1018 10:36:20.713138  500152 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:38 /usr/share/ca-certificates/295193.pem
	I1018 10:36:20.713213  500152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295193.pem
	I1018 10:36:20.756816  500152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295193.pem /etc/ssl/certs/51391683.0"
	I1018 10:36:20.765681  500152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951932.pem && ln -fs /usr/share/ca-certificates/2951932.pem /etc/ssl/certs/2951932.pem"
	I1018 10:36:20.774695  500152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951932.pem
	I1018 10:36:20.779079  500152 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:38 /usr/share/ca-certificates/2951932.pem
	I1018 10:36:20.779142  500152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951932.pem
	I1018 10:36:20.820608  500152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951932.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 10:36:20.829099  500152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 10:36:20.839416  500152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:36:20.845421  500152 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 09:31 /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:36:20.845482  500152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 10:36:20.902444  500152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 10:36:20.912799  500152 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 10:36:20.917453  500152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 10:36:20.960454  500152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 10:36:21.042936  500152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 10:36:21.145480  500152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 10:36:21.292567  500152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 10:36:21.371121  500152 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
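
`openssl x509 -checkend 86400` exits non-zero when a certificate expires within the next 86400 seconds; that is how this restart path decides whether the existing certs are still usable. The equivalent check written against Go's standard library:

// Sketch: does the certificate expire within the given window?
// Mirrors `openssl x509 -noout -in <cert> -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
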
	I1018 10:36:21.463336  500152 kubeadm.go:400] StartCluster: {Name:no-preload-027087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-027087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 10:36:21.463426  500152 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 10:36:21.463499  500152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 10:36:21.516278  500152 cri.go:89] found id: "d968383151da802efc708a7893731beff322978e9b5c6aca61c66a9890a4c2a7"
	I1018 10:36:21.516303  500152 cri.go:89] found id: "5238dbc53ff79046c10165b63aa29a7982380bb94f85339a7f129ae1992c4868"
	I1018 10:36:21.516318  500152 cri.go:89] found id: "e261c5b0adde6796a1e7af7d2200022c257e6f59c693f0219b6f283cde6d5b44"
	I1018 10:36:21.516322  500152 cri.go:89] found id: "7fcb9a21d1a3177d9033d4cb769bd9b7f55c25b4643124089cf2f78a928074e9"
	I1018 10:36:21.516325  500152 cri.go:89] found id: ""
	I1018 10:36:21.516376  500152 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 10:36:21.545505  500152 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T10:36:21Z" level=error msg="open /run/runc: no such file or directory"
	I1018 10:36:21.545591  500152 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 10:36:21.560421  500152 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 10:36:21.560441  500152 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 10:36:21.560494  500152 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 10:36:21.572706  500152 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 10:36:21.573154  500152 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-027087" does not appear in /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:36:21.573291  500152 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-293333/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-027087" cluster setting kubeconfig missing "no-preload-027087" context setting]
	I1018 10:36:21.573610  500152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:21.574883  500152 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 10:36:21.600607  500152 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 10:36:21.600642  500152 kubeadm.go:601] duration metric: took 40.194519ms to restartPrimaryControlPlane
	I1018 10:36:21.600651  500152 kubeadm.go:402] duration metric: took 137.325537ms to StartCluster
	I1018 10:36:21.600666  500152 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:21.600730  500152 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:36:21.601418  500152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:21.601642  500152 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:36:21.602029  500152 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:36:21.602111  500152 addons.go:69] Setting storage-provisioner=true in profile "no-preload-027087"
	I1018 10:36:21.602125  500152 addons.go:238] Setting addon storage-provisioner=true in "no-preload-027087"
	W1018 10:36:21.602130  500152 addons.go:247] addon storage-provisioner should already be in state true
	I1018 10:36:21.602156  500152 host.go:66] Checking if "no-preload-027087" exists ...
	I1018 10:36:21.602581  500152 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:36:21.602924  500152 config.go:182] Loaded profile config "no-preload-027087": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:36:21.602988  500152 addons.go:69] Setting dashboard=true in profile "no-preload-027087"
	I1018 10:36:21.602997  500152 addons.go:238] Setting addon dashboard=true in "no-preload-027087"
	W1018 10:36:21.603004  500152 addons.go:247] addon dashboard should already be in state true
	I1018 10:36:21.603026  500152 host.go:66] Checking if "no-preload-027087" exists ...
	I1018 10:36:21.603420  500152 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:36:21.607257  500152 addons.go:69] Setting default-storageclass=true in profile "no-preload-027087"
	I1018 10:36:21.607462  500152 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-027087"
	I1018 10:36:21.607815  500152 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:36:21.608025  500152 out.go:179] * Verifying Kubernetes components...
	I1018 10:36:21.611900  500152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:36:21.652909  500152 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:36:21.656144  500152 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:36:21.656164  500152 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:36:21.656224  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:21.663592  500152 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 10:36:21.669077  500152 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 10:36:21.671482  500152 addons.go:238] Setting addon default-storageclass=true in "no-preload-027087"
	W1018 10:36:21.671503  500152 addons.go:247] addon default-storageclass should already be in state true
	I1018 10:36:21.671527  500152 host.go:66] Checking if "no-preload-027087" exists ...
	I1018 10:36:21.671957  500152 cli_runner.go:164] Run: docker container inspect no-preload-027087 --format={{.State.Status}}
	I1018 10:36:21.674152  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 10:36:21.674187  500152 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 10:36:21.674258  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:21.714602  500152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:36:21.721372  500152 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:36:21.721396  500152 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:36:21.721458  500152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-027087
	I1018 10:36:21.723156  500152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:36:21.756835  500152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/no-preload-027087/id_rsa Username:docker}
	I1018 10:36:22.089029  500152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:36:22.102299  500152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:36:22.230507  500152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:36:22.250831  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 10:36:22.250905  500152 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 10:36:22.332071  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 10:36:22.332146  500152 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 10:36:22.463788  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 10:36:22.463876  500152 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 10:36:22.489086  499205 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 10:36:22.872596  499205 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 10:36:23.605550  499205 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 10:36:23.605687  499205 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-881658 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 10:36:24.300304  499205 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 10:36:24.302462  499205 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-881658 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 10:36:24.739941  499205 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 10:36:24.912011  499205 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 10:36:25.561698  499205 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 10:36:25.562044  499205 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 10:36:22.566649  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 10:36:22.566719  500152 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 10:36:22.622447  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 10:36:22.622524  500152 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 10:36:22.667085  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 10:36:22.667162  500152 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 10:36:22.693280  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 10:36:22.693353  500152 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 10:36:22.731688  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 10:36:22.731760  500152 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 10:36:22.783712  500152 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 10:36:22.783785  500152 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 10:36:22.822191  500152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 10:36:25.933651  499205 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 10:36:27.206792  499205 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 10:36:27.613568  499205 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 10:36:28.517656  499205 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 10:36:29.816956  499205 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 10:36:29.817823  499205 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 10:36:29.820721  499205 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 10:36:29.824447  499205 out.go:252]   - Booting up control plane ...
	I1018 10:36:29.824554  499205 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 10:36:29.824636  499205 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 10:36:29.825649  499205 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 10:36:29.848733  499205 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 10:36:29.848847  499205 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 10:36:29.862958  499205 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 10:36:29.863064  499205 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 10:36:29.863106  499205 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 10:36:30.077052  499205 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 10:36:30.077201  499205 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 10:36:32.663979  500152 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.574865424s)
	I1018 10:36:32.664037  500152 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.561667021s)
	I1018 10:36:32.664068  500152 node_ready.go:35] waiting up to 6m0s for node "no-preload-027087" to be "Ready" ...
	I1018 10:36:32.664388  500152 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.433807112s)
	I1018 10:36:32.712056  500152 node_ready.go:49] node "no-preload-027087" is "Ready"
	I1018 10:36:32.712083  500152 node_ready.go:38] duration metric: took 47.994237ms for node "no-preload-027087" to be "Ready" ...
	I1018 10:36:32.712096  500152 api_server.go:52] waiting for apiserver process to appear ...
	I1018 10:36:32.712158  500152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:36:33.209656  500152 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.387374417s)
	I1018 10:36:33.209921  500152 api_server.go:72] duration metric: took 11.608245627s to wait for apiserver process to appear ...
	I1018 10:36:33.209972  500152 api_server.go:88] waiting for apiserver healthz status ...
	I1018 10:36:33.210027  500152 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 10:36:33.213029  500152 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-027087 addons enable metrics-server
	
	I1018 10:36:33.215887  500152 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1018 10:36:32.078666  499205 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001750584s
	I1018 10:36:32.095169  499205 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 10:36:32.095279  499205 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 10:36:32.095378  499205 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 10:36:32.095465  499205 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
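The [kubelet-check] and [control-plane-check] lines above poll fixed local endpoints until each component reports healthy. A minimal sketch of the same probes, assuming shell access on the control-plane node (for example via `minikube ssh`):

    # kubelet serves plain HTTP on localhost
    curl -s http://127.0.0.1:10248/healthz; echo
    # controller-manager and scheduler use self-signed HTTPS, hence -k
    curl -sk https://127.0.0.1:10257/healthz; echo
    curl -sk https://127.0.0.1:10259/livez; echo
    # apiserver livez on the node's advertise address, as in the log
    curl -sk https://192.168.85.2:8443/livez; echo

Each prints "ok" once the corresponding component is up.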
	I1018 10:36:33.218709  500152 addons.go:514] duration metric: took 11.616667913s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1018 10:36:33.239721  500152 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 10:36:33.240735  500152 api_server.go:141] control plane version: v1.34.1
	I1018 10:36:33.240757  500152 api_server.go:131] duration metric: took 30.756643ms to wait for apiserver health ...
	I1018 10:36:33.240766  500152 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 10:36:33.245607  500152 system_pods.go:59] 8 kube-system pods found
	I1018 10:36:33.245704  500152 system_pods.go:61] "coredns-66bc5c9577-wt4wd" [ff570964-d787-4c47-a498-4ac05ed09b0a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:36:33.245731  500152 system_pods.go:61] "etcd-no-preload-027087" [df0b81be-5ccd-481d-88e8-0a351635eab5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:36:33.245771  500152 system_pods.go:61] "kindnet-t9q5g" [4286ff28-6eca-4678-9d54-3a2dbe9bf8d1] Running
	I1018 10:36:33.245797  500152 system_pods.go:61] "kube-apiserver-no-preload-027087" [949b1bb0-6625-40d4-b2a4-75e49fd87133] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:36:33.245821  500152 system_pods.go:61] "kube-controller-manager-no-preload-027087" [1395022f-1ef0-43f8-b175-f5c5fdfdb777] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:36:33.245858  500152 system_pods.go:61] "kube-proxy-s87k4" [2e127631-8e09-43da-8d5a-7238894eedac] Running
	I1018 10:36:33.245884  500152 system_pods.go:61] "kube-scheduler-no-preload-027087" [dd112b07-cc98-4f21-8211-3ac896ec0be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:36:33.245904  500152 system_pods.go:61] "storage-provisioner" [b6343f75-ba5e-48f6-8eec-5343cabc28a4] Running
	I1018 10:36:33.245939  500152 system_pods.go:74] duration metric: took 5.167032ms to wait for pod list to return data ...
	I1018 10:36:33.245963  500152 default_sa.go:34] waiting for default service account to be created ...
	I1018 10:36:33.259167  500152 default_sa.go:45] found service account: "default"
	I1018 10:36:33.259190  500152 default_sa.go:55] duration metric: took 13.210425ms for default service account to be created ...
	I1018 10:36:33.259199  500152 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 10:36:33.263107  500152 system_pods.go:86] 8 kube-system pods found
	I1018 10:36:33.263188  500152 system_pods.go:89] "coredns-66bc5c9577-wt4wd" [ff570964-d787-4c47-a498-4ac05ed09b0a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 10:36:33.263213  500152 system_pods.go:89] "etcd-no-preload-027087" [df0b81be-5ccd-481d-88e8-0a351635eab5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 10:36:33.263233  500152 system_pods.go:89] "kindnet-t9q5g" [4286ff28-6eca-4678-9d54-3a2dbe9bf8d1] Running
	I1018 10:36:33.263282  500152 system_pods.go:89] "kube-apiserver-no-preload-027087" [949b1bb0-6625-40d4-b2a4-75e49fd87133] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 10:36:33.263306  500152 system_pods.go:89] "kube-controller-manager-no-preload-027087" [1395022f-1ef0-43f8-b175-f5c5fdfdb777] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 10:36:33.263343  500152 system_pods.go:89] "kube-proxy-s87k4" [2e127631-8e09-43da-8d5a-7238894eedac] Running
	I1018 10:36:33.263367  500152 system_pods.go:89] "kube-scheduler-no-preload-027087" [dd112b07-cc98-4f21-8211-3ac896ec0be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 10:36:33.263386  500152 system_pods.go:89] "storage-provisioner" [b6343f75-ba5e-48f6-8eec-5343cabc28a4] Running
	I1018 10:36:33.263423  500152 system_pods.go:126] duration metric: took 4.203574ms to wait for k8s-apps to be running ...
	I1018 10:36:33.263448  500152 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 10:36:33.263535  500152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:36:33.299892  500152 system_svc.go:56] duration metric: took 36.436052ms WaitForService to wait for kubelet
	I1018 10:36:33.299970  500152 kubeadm.go:586] duration metric: took 11.698293925s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 10:36:33.300002  500152 node_conditions.go:102] verifying NodePressure condition ...
	I1018 10:36:33.310360  500152 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 10:36:33.310439  500152 node_conditions.go:123] node cpu capacity is 2
	I1018 10:36:33.310467  500152 node_conditions.go:105] duration metric: took 10.443678ms to run NodePressure ...
	I1018 10:36:33.310491  500152 start.go:241] waiting for startup goroutines ...
	I1018 10:36:33.310524  500152 start.go:246] waiting for cluster config update ...
	I1018 10:36:33.310553  500152 start.go:255] writing updated cluster config ...
	I1018 10:36:33.310887  500152 ssh_runner.go:195] Run: rm -f paused
	I1018 10:36:33.321602  500152 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:36:33.326077  500152 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wt4wd" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 10:36:35.333108  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	I1018 10:36:36.244160  499205 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.149079229s
	I1018 10:36:40.593427  499205 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.497898011s
	I1018 10:36:42.096870  499205 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.001959644s
	I1018 10:36:42.136510  499205 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 10:36:42.167918  499205 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 10:36:42.194207  499205 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 10:36:42.194431  499205 kubeadm.go:318] [mark-control-plane] Marking the node auto-881658 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 10:36:42.214798  499205 kubeadm.go:318] [bootstrap-token] Using token: crgxz9.45dtxljsereikmmm
	W1018 10:36:37.835460  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:36:40.333748  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	I1018 10:36:42.218059  499205 out.go:252]   - Configuring RBAC rules ...
	I1018 10:36:42.218197  499205 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 10:36:42.231650  499205 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 10:36:42.251200  499205 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 10:36:42.261375  499205 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 10:36:42.267581  499205 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 10:36:42.273829  499205 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 10:36:42.505679  499205 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 10:36:43.055114  499205 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 10:36:43.507581  499205 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 10:36:43.509429  499205 kubeadm.go:318] 
	I1018 10:36:43.509520  499205 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 10:36:43.509530  499205 kubeadm.go:318] 
	I1018 10:36:43.509613  499205 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 10:36:43.509622  499205 kubeadm.go:318] 
	I1018 10:36:43.509649  499205 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 10:36:43.509715  499205 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 10:36:43.509772  499205 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 10:36:43.509780  499205 kubeadm.go:318] 
	I1018 10:36:43.509837  499205 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 10:36:43.509845  499205 kubeadm.go:318] 
	I1018 10:36:43.509895  499205 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 10:36:43.509903  499205 kubeadm.go:318] 
	I1018 10:36:43.509963  499205 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 10:36:43.510048  499205 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 10:36:43.510123  499205 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 10:36:43.510132  499205 kubeadm.go:318] 
	I1018 10:36:43.510226  499205 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 10:36:43.510311  499205 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 10:36:43.510320  499205 kubeadm.go:318] 
	I1018 10:36:43.510409  499205 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token crgxz9.45dtxljsereikmmm \
	I1018 10:36:43.510520  499205 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 \
	I1018 10:36:43.510545  499205 kubeadm.go:318] 	--control-plane 
	I1018 10:36:43.510553  499205 kubeadm.go:318] 
	I1018 10:36:43.510642  499205 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 10:36:43.510650  499205 kubeadm.go:318] 
	I1018 10:36:43.510735  499205 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token crgxz9.45dtxljsereikmmm \
	I1018 10:36:43.510842  499205 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:541549c65ac17fcd9bbb95726b404ce3c499240091326a780b28888130ed8397 
	I1018 10:36:43.517542  499205 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 10:36:43.517797  499205 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 10:36:43.517914  499205 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
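The join commands above embed a --discovery-token-ca-cert-hash, which is just the SHA-256 of the cluster CA's public key, so it can be recomputed on the control-plane node to verify a join command. A sketch using the standard openssl pipeline:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

This should print the 541549c6... digest shown above (kubeadm prefixes it with "sha256:").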
	I1018 10:36:43.517999  499205 cni.go:84] Creating CNI manager for ""
	I1018 10:36:43.518029  499205 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 10:36:43.524710  499205 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 10:36:43.527727  499205 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 10:36:43.541278  499205 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 10:36:43.541298  499205 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 10:36:43.580786  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 10:36:44.698201  499205 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.117383107s)
	I1018 10:36:44.698287  499205 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 10:36:44.698453  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:36:44.698571  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-881658 minikube.k8s.io/updated_at=2025_10_18T10_36_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=auto-881658 minikube.k8s.io/primary=true
	I1018 10:36:45.069801  499205 ops.go:34] apiserver oom_adj: -16
	I1018 10:36:45.069959  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:36:45.570786  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1018 10:36:42.834507  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:36:44.835883  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:36:46.840519  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	I1018 10:36:46.070728  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:36:46.570417  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:36:47.070896  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:36:47.570347  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:36:48.070033  499205 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 10:36:48.273864  499205 kubeadm.go:1113] duration metric: took 3.575459639s to wait for elevateKubeSystemPrivileges
	I1018 10:36:48.273889  499205 kubeadm.go:402] duration metric: took 28.811141561s to StartCluster
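The half-second cadence of the `kubectl get sa default` runs above is minikube waiting for the ServiceAccount controller to create the "default" account before binding kube-system privileges. The same wait as a standalone loop (a sketch; binary and kubeconfig paths taken from the log):

    # poll until the "default" ServiceAccount exists
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done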
	I1018 10:36:48.273906  499205 settings.go:142] acquiring lock: {Name:mk1ee79131e10a87f8e55f54baa97056ed313683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:48.273971  499205 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:36:48.274948  499205 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/kubeconfig: {Name:mk0d0d3cc8073f8115dc4e8a6e2806a24867cf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 10:36:48.275167  499205 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 10:36:48.275297  499205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 10:36:48.275550  499205 config.go:182] Loaded profile config "auto-881658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:36:48.275581  499205 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 10:36:48.275640  499205 addons.go:69] Setting storage-provisioner=true in profile "auto-881658"
	I1018 10:36:48.275654  499205 addons.go:238] Setting addon storage-provisioner=true in "auto-881658"
	I1018 10:36:48.275675  499205 host.go:66] Checking if "auto-881658" exists ...
	I1018 10:36:48.276198  499205 cli_runner.go:164] Run: docker container inspect auto-881658 --format={{.State.Status}}
	I1018 10:36:48.276728  499205 addons.go:69] Setting default-storageclass=true in profile "auto-881658"
	I1018 10:36:48.276749  499205 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-881658"
	I1018 10:36:48.277021  499205 cli_runner.go:164] Run: docker container inspect auto-881658 --format={{.State.Status}}
	I1018 10:36:48.284253  499205 out.go:179] * Verifying Kubernetes components...
	I1018 10:36:48.289825  499205 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 10:36:48.321296  499205 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 10:36:48.322819  499205 addons.go:238] Setting addon default-storageclass=true in "auto-881658"
	I1018 10:36:48.322855  499205 host.go:66] Checking if "auto-881658" exists ...
	I1018 10:36:48.323262  499205 cli_runner.go:164] Run: docker container inspect auto-881658 --format={{.State.Status}}
	I1018 10:36:48.327234  499205 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:36:48.327255  499205 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 10:36:48.327324  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:48.373244  499205 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 10:36:48.373266  499205 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 10:36:48.373331  499205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-881658
	I1018 10:36:48.380137  499205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa Username:docker}
	I1018 10:36:48.408055  499205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/auto-881658/id_rsa Username:docker}
	I1018 10:36:48.867256  499205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 10:36:48.899392  499205 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 10:36:49.102593  499205 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 10:36:49.102705  499205 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 10:36:50.316487  499205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.449147954s)
	I1018 10:36:50.316590  499205 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.417123425s)
	I1018 10:36:50.316649  499205 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.214034135s)
	I1018 10:36:50.316614  499205 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.213882346s)
	I1018 10:36:50.317732  499205 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
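The sed pipeline a few lines above splices a hosts block into the CoreDNS Corefile so pods can resolve host.minikube.internal. To confirm the injected record, assuming kubectl is pointed at this cluster (a sketch):

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' \
      | grep -A3 'hosts {'
    # expected fragment, per the replace command above:
    #   hosts {
    #      192.168.85.1 host.minikube.internal
    #      fallthrough
    #   }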
	I1018 10:36:50.319169  499205 node_ready.go:35] waiting up to 15m0s for node "auto-881658" to be "Ready" ...
	I1018 10:36:50.368896  499205 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 10:36:50.371700  499205 addons.go:514] duration metric: took 2.096097811s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1018 10:36:49.338083  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:36:51.832877  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	I1018 10:36:50.821589  499205 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-881658" context rescaled to 1 replicas
	W1018 10:36:52.321968  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:36:54.822742  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:36:54.332057  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:36:56.832819  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:36:57.322488  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:36:59.822256  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:36:59.332007  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:37:01.833085  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:37:01.824782  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:04.322785  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:04.331680  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:37:06.832643  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:37:06.823069  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:09.322019  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:09.331177  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	W1018 10:37:11.333175  500152 pod_ready.go:104] pod "coredns-66bc5c9577-wt4wd" is not "Ready", error: <nil>
	I1018 10:37:13.332180  500152 pod_ready.go:94] pod "coredns-66bc5c9577-wt4wd" is "Ready"
	I1018 10:37:13.332213  500152 pod_ready.go:86] duration metric: took 40.0061067s for pod "coredns-66bc5c9577-wt4wd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:13.335472  500152 pod_ready.go:83] waiting for pod "etcd-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:13.340149  500152 pod_ready.go:94] pod "etcd-no-preload-027087" is "Ready"
	I1018 10:37:13.340177  500152 pod_ready.go:86] duration metric: took 4.675377ms for pod "etcd-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:13.342433  500152 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:13.347302  500152 pod_ready.go:94] pod "kube-apiserver-no-preload-027087" is "Ready"
	I1018 10:37:13.347331  500152 pod_ready.go:86] duration metric: took 4.869488ms for pod "kube-apiserver-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:13.349923  500152 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:13.530669  500152 pod_ready.go:94] pod "kube-controller-manager-no-preload-027087" is "Ready"
	I1018 10:37:13.530700  500152 pod_ready.go:86] duration metric: took 180.750984ms for pod "kube-controller-manager-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:13.730767  500152 pod_ready.go:83] waiting for pod "kube-proxy-s87k4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:14.130717  500152 pod_ready.go:94] pod "kube-proxy-s87k4" is "Ready"
	I1018 10:37:14.130751  500152 pod_ready.go:86] duration metric: took 399.906388ms for pod "kube-proxy-s87k4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:14.330101  500152 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:14.730141  500152 pod_ready.go:94] pod "kube-scheduler-no-preload-027087" is "Ready"
	I1018 10:37:14.730166  500152 pod_ready.go:86] duration metric: took 400.040489ms for pod "kube-scheduler-no-preload-027087" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 10:37:14.730179  500152 pod_ready.go:40] duration metric: took 41.408544188s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 10:37:14.784915  500152 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 10:37:14.787968  500152 out.go:179] * Done! kubectl is now configured to use "no-preload-027087" cluster and "default" namespace by default
	W1018 10:37:11.323097  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:13.822000  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:15.822375  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:18.322607  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:20.822930  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:23.322012  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:25.822359  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:28.322638  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	W1018 10:37:30.327532  499205 node_ready.go:57] node "auto-881658" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.57465359Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9a279d2a-73e8-4a02-8ba0-1e3c3bebeba4 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.575988205Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=60c2199b-ac06-4d78-ae93-89501f90d60d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.576960451Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2/dashboard-metrics-scraper" id=d7f77df7-5988-46dd-97ca-b7e6dac77387 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.577267483Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.584972355Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.586421933Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.603773464Z" level=info msg="Created container e007cc5af2d622f31ced7fa509429c00f7b2e44a9cd37dcfbec526228eb011e7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2/dashboard-metrics-scraper" id=d7f77df7-5988-46dd-97ca-b7e6dac77387 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.604489559Z" level=info msg="Starting container: e007cc5af2d622f31ced7fa509429c00f7b2e44a9cd37dcfbec526228eb011e7" id=f0e528e5-9411-4e55-8f3b-16fba2552037 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 10:37:07 no-preload-027087 crio[651]: time="2025-10-18T10:37:07.606293956Z" level=info msg="Started container" PID=1634 containerID=e007cc5af2d622f31ced7fa509429c00f7b2e44a9cd37dcfbec526228eb011e7 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2/dashboard-metrics-scraper id=f0e528e5-9411-4e55-8f3b-16fba2552037 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f95c9b99798ada6c0d78df5de706f078e3cb01fa10d1142a4b3d1642eb78d602
	Oct 18 10:37:07 no-preload-027087 conmon[1632]: conmon e007cc5af2d622f31ced <ninfo>: container 1634 exited with status 1
	Oct 18 10:37:08 no-preload-027087 crio[651]: time="2025-10-18T10:37:08.145095157Z" level=info msg="Removing container: 4ff408408dfa168e0c6a4f48161b231dfb33d73bd601499c276a66e4f3b1a742" id=dbf6634f-ad67-4c57-a62b-84ea67e0c507 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:37:08 no-preload-027087 crio[651]: time="2025-10-18T10:37:08.167973398Z" level=info msg="Error loading conmon cgroup of container 4ff408408dfa168e0c6a4f48161b231dfb33d73bd601499c276a66e4f3b1a742: cgroup deleted" id=dbf6634f-ad67-4c57-a62b-84ea67e0c507 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:37:08 no-preload-027087 crio[651]: time="2025-10-18T10:37:08.175788139Z" level=info msg="Removed container 4ff408408dfa168e0c6a4f48161b231dfb33d73bd601499c276a66e4f3b1a742: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2/dashboard-metrics-scraper" id=dbf6634f-ad67-4c57-a62b-84ea67e0c507 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.670975146Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.678468947Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.678505699Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.67852942Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.682644961Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.682685241Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.682711457Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.685933645Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.685968386Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.685992394Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.689435582Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 10:37:11 no-preload-027087 crio[651]: time="2025-10-18T10:37:11.68947006Z" level=info msg="Updated default CNI network name to kindnet"
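CRI-O watches /etc/cni/net.d and reloads the default network whenever kindnet rewrites its conflist via the temp-file-then-rename sequence logged above (CREATE, WRITE, RENAME, CREATE). The resulting config can be inspected on the node (a sketch):

    ls -l /etc/cni/net.d/
    # the "kindnet" network, type "ptp", as CRI-O reported
    sudo cat /etc/cni/net.d/10-kindnet.conflist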
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e007cc5af2d62       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   f95c9b99798ad       dashboard-metrics-scraper-6ffb444bf9-vvmt2   kubernetes-dashboard
	2cbdb2a8528e4       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           29 seconds ago       Running             storage-provisioner         2                   385f1b53b202e       storage-provisioner                          kube-system
	9919fe4eee7dc       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   48 seconds ago       Running             kubernetes-dashboard        0                   4643811be6b5e       kubernetes-dashboard-855c9754f9-trfvl        kubernetes-dashboard
	dc488ebaa6807       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   a5cbbf8b12daa       busybox                                      default
	4b114ce56de2f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   2f616200b15c3       coredns-66bc5c9577-wt4wd                     kube-system
	0de3795567e7d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   40fa6120b28bd       kube-proxy-s87k4                             kube-system
	6868199e0f045       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   ae050e65d4683       kindnet-t9q5g                                kube-system
	c22c014947e9e       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           About a minute ago   Exited              storage-provisioner         1                   385f1b53b202e       storage-provisioner                          kube-system
	d968383151da8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   6816bd5ea7d9f       kube-apiserver-no-preload-027087             kube-system
	5238dbc53ff79       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   531f96e4818b3       kube-controller-manager-no-preload-027087    kube-system
	e261c5b0adde6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   7994e71e175a6       etcd-no-preload-027087                       kube-system
	7fcb9a21d1a31       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   fb697fb3a0920       kube-scheduler-no-preload-027087             kube-system
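The table above is CRI-O's container listing; the same view is available on the node through crictl, assuming crictl is configured against the CRI-O socket (a sketch):

    sudo crictl ps -a                                     # all containers, including Exited ones
    sudo crictl ps -a --name dashboard-metrics-scraper    # filter by container name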
	
	
	==> coredns [4b114ce56de2ff36fd41657a70702954670fd16b567eaf13b39d0991c0e0a02b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39925 - 10923 "HINFO IN 7437050735640074136.3276881364983829134. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024649489s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
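The "dial tcp 10.96.0.1:443: i/o timeout" errors mean CoreDNS briefly could not reach the apiserver's Service VIP, consistent with kube-proxy still restarting at that point. One way to probe the same path from inside the cluster (a hypothetical one-off pod; the pod name and image are illustrative, not from the test):

    kubectl run netcheck --rm -it --restart=Never --image=curlimages/curl -- \
      -sk --max-time 3 https://10.96.0.1:443/version

A JSON version payload (or any HTTP response at all) shows the VIP path is working again.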
	
	
	==> describe nodes <==
	Name:               no-preload-027087
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-027087
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=no-preload-027087
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T10_35_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 10:35:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-027087
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 10:37:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 10:37:01 +0000   Sat, 18 Oct 2025 10:35:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 10:37:01 +0000   Sat, 18 Oct 2025 10:35:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 10:37:01 +0000   Sat, 18 Oct 2025 10:35:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 10:37:01 +0000   Sat, 18 Oct 2025 10:35:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-027087
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                bcb80226-a3a4-43ba-81ed-aa5457f89057
	  Boot ID:                    b8624f98-ff95-47b1-8620-7f364ebc5167
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 coredns-66bc5c9577-wt4wd                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m3s
	  kube-system                 etcd-no-preload-027087                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m8s
	  kube-system                 kindnet-t9q5g                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m3s
	  kube-system                 kube-apiserver-no-preload-027087              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-controller-manager-no-preload-027087     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-proxy-s87k4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-no-preload-027087              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vvmt2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-trfvl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
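The 42% CPU figure is just the column sum from the pod table: 100m + 100m + 100m + 250m + 200m + 100m = 850m against 2000m allocatable, with kubectl truncating 42.5% down to 42% (memory likewise: 70Mi + 100Mi + 50Mi = 220Mi). A one-line check:

    echo $(( (100+100+100+250+200+100) * 100 / 2000 ))   # 42, matching the table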
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m1s                   kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Warning  CgroupV1                 2m20s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m19s (x8 over 2m19s)  kubelet          Node no-preload-027087 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m19s (x8 over 2m19s)  kubelet          Node no-preload-027087 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m19s (x8 over 2m19s)  kubelet          Node no-preload-027087 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m9s                   kubelet          Node no-preload-027087 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m9s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m9s                   kubelet          Node no-preload-027087 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m9s                   kubelet          Node no-preload-027087 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m9s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m5s                   node-controller  Node no-preload-027087 event: Registered Node no-preload-027087 in Controller
	  Normal   NodeReady                107s                   kubelet          Node no-preload-027087 status is now: NodeReady
	  Normal   Starting                 72s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 72s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)      kubelet          Node no-preload-027087 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)      kubelet          Node no-preload-027087 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 72s)      kubelet          Node no-preload-027087 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                    node-controller  Node no-preload-027087 event: Registered Node no-preload-027087 in Controller
	
	
	==> dmesg <==
	[Oct18 10:17] overlayfs: idmapped layers are currently not supported
	[ +23.839207] overlayfs: idmapped layers are currently not supported
	[Oct18 10:18] overlayfs: idmapped layers are currently not supported
	[ +26.047183] overlayfs: idmapped layers are currently not supported
	[Oct18 10:19] overlayfs: idmapped layers are currently not supported
	[Oct18 10:21] overlayfs: idmapped layers are currently not supported
	[ +55.677340] overlayfs: idmapped layers are currently not supported
	[  +3.870584] overlayfs: idmapped layers are currently not supported
	[Oct18 10:24] overlayfs: idmapped layers are currently not supported
	[ +31.226998] overlayfs: idmapped layers are currently not supported
	[Oct18 10:27] overlayfs: idmapped layers are currently not supported
	[ +41.576921] overlayfs: idmapped layers are currently not supported
	[  +5.117406] overlayfs: idmapped layers are currently not supported
	[Oct18 10:28] overlayfs: idmapped layers are currently not supported
	[Oct18 10:29] overlayfs: idmapped layers are currently not supported
	[Oct18 10:30] overlayfs: idmapped layers are currently not supported
	[Oct18 10:31] overlayfs: idmapped layers are currently not supported
	[  +3.453230] overlayfs: idmapped layers are currently not supported
	[Oct18 10:33] overlayfs: idmapped layers are currently not supported
	[  +6.524055] overlayfs: idmapped layers are currently not supported
	[Oct18 10:34] overlayfs: idmapped layers are currently not supported
	[Oct18 10:35] overlayfs: idmapped layers are currently not supported
	[ +27.675349] overlayfs: idmapped layers are currently not supported
	[Oct18 10:36] overlayfs: idmapped layers are currently not supported
	[ +11.230155] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e261c5b0adde6796a1e7af7d2200022c257e6f59c693f0219b6f283cde6d5b44] <==
	{"level":"warn","ts":"2025-10-18T10:36:27.565752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.614243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.661971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.737459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.765564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.814776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.826175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.852014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.868352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.885753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.909068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.931285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.965623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:27.999622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.020171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.049494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.071180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.097863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.132798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.162206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.197179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.231530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.253705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.282025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T10:36:28.413027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40798","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:37:32 up  2:20,  0 user,  load average: 3.84, 4.42, 3.54
	Linux no-preload-027087 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6868199e0f045baf4d0c7a7f0f549c97259e341becc1e091f19130b6f1755866] <==
	I1018 10:36:31.304048       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 10:36:31.309326       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 10:36:31.309482       1 main.go:148] setting mtu 1500 for CNI 
	I1018 10:36:31.309496       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 10:36:31.309506       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T10:36:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 10:36:31.669618       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 10:36:31.669637       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 10:36:31.669647       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 10:36:31.669919       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 10:37:01.668579       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 10:37:01.670533       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 10:37:01.670533       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 10:37:01.670637       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1018 10:37:02.870139       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 10:37:02.870171       1 metrics.go:72] Registering metrics
	I1018 10:37:02.870546       1 controller.go:711] "Syncing nftables rules"
	I1018 10:37:11.670671       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:37:11.670723       1 main.go:301] handling current node
	I1018 10:37:21.672501       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:37:21.672539       1 main.go:301] handling current node
	I1018 10:37:31.683013       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 10:37:31.683043       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d968383151da802efc708a7893731beff322978e9b5c6aca61c66a9890a4c2a7] <==
	I1018 10:36:30.269661       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 10:36:30.269707       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 10:36:30.292905       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 10:36:30.293287       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 10:36:30.332634       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 10:36:30.339357       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 10:36:30.339390       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 10:36:30.339499       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 10:36:30.345406       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 10:36:30.357267       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 10:36:30.357299       1 policy_source.go:240] refreshing policies
	I1018 10:36:30.366450       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 10:36:30.430810       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1018 10:36:30.454320       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 10:36:30.504815       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 10:36:30.630921       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 10:36:32.393911       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 10:36:32.630352       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 10:36:32.818905       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 10:36:32.874656       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 10:36:33.130271       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.153.86"}
	I1018 10:36:33.195252       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.225.52"}
	I1018 10:36:34.281090       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 10:36:34.374605       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 10:36:34.629789       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [5238dbc53ff79046c10165b63aa29a7982380bb94f85339a7f129ae1992c4868] <==
	I1018 10:36:34.249799       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 10:36:34.249966       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 10:36:34.250044       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 10:36:34.250072       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 10:36:34.257446       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 10:36:34.257448       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 10:36:34.257606       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 10:36:34.261335       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:36:34.273255       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 10:36:34.273316       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 10:36:34.273351       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 10:36:34.273379       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 10:36:34.273391       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 10:36:34.273397       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 10:36:34.273457       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 10:36:34.273489       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 10:36:34.273477       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 10:36:34.282218       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 10:36:34.282218       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 10:36:34.287570       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 10:36:34.287834       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 10:36:34.288752       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 10:36:34.303376       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 10:36:34.303472       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 10:36:34.304649       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	
	
	==> kube-proxy [0de3795567e7dc2268ccf4ed71cc0a8ca7702aa8ac6ca751af108c5769adf6aa] <==
	I1018 10:36:32.550975       1 server_linux.go:53] "Using iptables proxy"
	I1018 10:36:32.902951       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 10:36:33.003369       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 10:36:33.003421       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 10:36:33.003518       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 10:36:33.448333       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 10:36:33.448391       1 server_linux.go:132] "Using iptables Proxier"
	I1018 10:36:33.486751       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 10:36:33.487079       1 server.go:527] "Version info" version="v1.34.1"
	I1018 10:36:33.487103       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:36:33.488313       1 config.go:200] "Starting service config controller"
	I1018 10:36:33.488396       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 10:36:33.500522       1 config.go:106] "Starting endpoint slice config controller"
	I1018 10:36:33.500597       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 10:36:33.500647       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 10:36:33.500674       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 10:36:33.525926       1 config.go:309] "Starting node config controller"
	I1018 10:36:33.577364       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 10:36:33.585219       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 10:36:33.590496       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 10:36:33.601310       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 10:36:33.601355       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7fcb9a21d1a3177d9033d4cb769bd9b7f55c25b4643124089cf2f78a928074e9] <==
	I1018 10:36:30.241090       1 serving.go:386] Generated self-signed cert in-memory
	I1018 10:36:35.525799       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 10:36:35.525839       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 10:36:35.530973       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 10:36:35.531327       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 10:36:35.531349       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 10:36:35.531373       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 10:36:35.532262       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:36:35.532287       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:36:35.532307       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 10:36:35.532329       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 10:36:35.632769       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 10:36:35.632826       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 10:36:35.637390       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 18 10:36:31 no-preload-027087 kubelet[769]: W1018 10:36:31.480326     769 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/crio-a5cbbf8b12daa87898402fa639805ad2dc0a438a3fa39295961f78b22623c2f6 WatchSource:0}: Error finding container a5cbbf8b12daa87898402fa639805ad2dc0a438a3fa39295961f78b22623c2f6: Status 404 returned error can't find the container with id a5cbbf8b12daa87898402fa639805ad2dc0a438a3fa39295961f78b22623c2f6
	Oct 18 10:36:35 no-preload-027087 kubelet[769]: I1018 10:36:35.756167     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvc9q\" (UniqueName: \"kubernetes.io/projected/4735cd3f-7f8f-4c4f-b3db-8a6544223c4e-kube-api-access-jvc9q\") pod \"kubernetes-dashboard-855c9754f9-trfvl\" (UID: \"4735cd3f-7f8f-4c4f-b3db-8a6544223c4e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-trfvl"
	Oct 18 10:36:35 no-preload-027087 kubelet[769]: I1018 10:36:35.756230     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/885d1c16-9a7e-4c1c-bfff-6ed345623dc1-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vvmt2\" (UID: \"885d1c16-9a7e-4c1c-bfff-6ed345623dc1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2"
	Oct 18 10:36:35 no-preload-027087 kubelet[769]: I1018 10:36:35.756257     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4735cd3f-7f8f-4c4f-b3db-8a6544223c4e-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-trfvl\" (UID: \"4735cd3f-7f8f-4c4f-b3db-8a6544223c4e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-trfvl"
	Oct 18 10:36:35 no-preload-027087 kubelet[769]: I1018 10:36:35.756283     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdh2g\" (UniqueName: \"kubernetes.io/projected/885d1c16-9a7e-4c1c-bfff-6ed345623dc1-kube-api-access-hdh2g\") pod \"dashboard-metrics-scraper-6ffb444bf9-vvmt2\" (UID: \"885d1c16-9a7e-4c1c-bfff-6ed345623dc1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2"
	Oct 18 10:36:36 no-preload-027087 kubelet[769]: W1018 10:36:36.037771     769 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f282a9c13400aaa2f92179c119f5bfdfe267ffb2dbfb3781e7a021c4b77deb75/crio-f95c9b99798ada6c0d78df5de706f078e3cb01fa10d1142a4b3d1642eb78d602 WatchSource:0}: Error finding container f95c9b99798ada6c0d78df5de706f078e3cb01fa10d1142a4b3d1642eb78d602: Status 404 returned error can't find the container with id f95c9b99798ada6c0d78df5de706f078e3cb01fa10d1142a4b3d1642eb78d602
	Oct 18 10:36:44 no-preload-027087 kubelet[769]: I1018 10:36:44.078191     769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-trfvl" podStartSLOduration=2.361248552 podStartE2EDuration="10.078173287s" podCreationTimestamp="2025-10-18 10:36:34 +0000 UTC" firstStartedPulling="2025-10-18 10:36:36.013098064 +0000 UTC m=+15.742529832" lastFinishedPulling="2025-10-18 10:36:43.730022717 +0000 UTC m=+23.459454567" observedRunningTime="2025-10-18 10:36:44.077754722 +0000 UTC m=+23.807186498" watchObservedRunningTime="2025-10-18 10:36:44.078173287 +0000 UTC m=+23.807605063"
	Oct 18 10:36:50 no-preload-027087 kubelet[769]: I1018 10:36:50.073967     769 scope.go:117] "RemoveContainer" containerID="ef30c02983eb28b7a364891a2e2ec0e59647874986e96b65631a811dd21cdfc3"
	Oct 18 10:36:51 no-preload-027087 kubelet[769]: I1018 10:36:51.079012     769 scope.go:117] "RemoveContainer" containerID="ef30c02983eb28b7a364891a2e2ec0e59647874986e96b65631a811dd21cdfc3"
	Oct 18 10:36:51 no-preload-027087 kubelet[769]: I1018 10:36:51.079631     769 scope.go:117] "RemoveContainer" containerID="4ff408408dfa168e0c6a4f48161b231dfb33d73bd601499c276a66e4f3b1a742"
	Oct 18 10:36:51 no-preload-027087 kubelet[769]: E1018 10:36:51.079966     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vvmt2_kubernetes-dashboard(885d1c16-9a7e-4c1c-bfff-6ed345623dc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2" podUID="885d1c16-9a7e-4c1c-bfff-6ed345623dc1"
	Oct 18 10:36:52 no-preload-027087 kubelet[769]: I1018 10:36:52.083492     769 scope.go:117] "RemoveContainer" containerID="4ff408408dfa168e0c6a4f48161b231dfb33d73bd601499c276a66e4f3b1a742"
	Oct 18 10:36:52 no-preload-027087 kubelet[769]: E1018 10:36:52.083689     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vvmt2_kubernetes-dashboard(885d1c16-9a7e-4c1c-bfff-6ed345623dc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2" podUID="885d1c16-9a7e-4c1c-bfff-6ed345623dc1"
	Oct 18 10:36:55 no-preload-027087 kubelet[769]: I1018 10:36:55.976150     769 scope.go:117] "RemoveContainer" containerID="4ff408408dfa168e0c6a4f48161b231dfb33d73bd601499c276a66e4f3b1a742"
	Oct 18 10:36:55 no-preload-027087 kubelet[769]: E1018 10:36:55.976385     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vvmt2_kubernetes-dashboard(885d1c16-9a7e-4c1c-bfff-6ed345623dc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2" podUID="885d1c16-9a7e-4c1c-bfff-6ed345623dc1"
	Oct 18 10:37:02 no-preload-027087 kubelet[769]: I1018 10:37:02.112333     769 scope.go:117] "RemoveContainer" containerID="c22c014947e9e9dc024d5d72f215ef4605e6ee6ca05a8753ddd66dd51ee9561c"
	Oct 18 10:37:07 no-preload-027087 kubelet[769]: I1018 10:37:07.573661     769 scope.go:117] "RemoveContainer" containerID="4ff408408dfa168e0c6a4f48161b231dfb33d73bd601499c276a66e4f3b1a742"
	Oct 18 10:37:08 no-preload-027087 kubelet[769]: I1018 10:37:08.138988     769 scope.go:117] "RemoveContainer" containerID="4ff408408dfa168e0c6a4f48161b231dfb33d73bd601499c276a66e4f3b1a742"
	Oct 18 10:37:08 no-preload-027087 kubelet[769]: I1018 10:37:08.141515     769 scope.go:117] "RemoveContainer" containerID="e007cc5af2d622f31ced7fa509429c00f7b2e44a9cd37dcfbec526228eb011e7"
	Oct 18 10:37:08 no-preload-027087 kubelet[769]: E1018 10:37:08.143016     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vvmt2_kubernetes-dashboard(885d1c16-9a7e-4c1c-bfff-6ed345623dc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2" podUID="885d1c16-9a7e-4c1c-bfff-6ed345623dc1"
	Oct 18 10:37:15 no-preload-027087 kubelet[769]: I1018 10:37:15.975898     769 scope.go:117] "RemoveContainer" containerID="e007cc5af2d622f31ced7fa509429c00f7b2e44a9cd37dcfbec526228eb011e7"
	Oct 18 10:37:15 no-preload-027087 kubelet[769]: E1018 10:37:15.976068     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vvmt2_kubernetes-dashboard(885d1c16-9a7e-4c1c-bfff-6ed345623dc1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vvmt2" podUID="885d1c16-9a7e-4c1c-bfff-6ed345623dc1"
	Oct 18 10:37:27 no-preload-027087 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 10:37:27 no-preload-027087 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 10:37:27 no-preload-027087 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [9919fe4eee7dc51c131498b9e1e50e76edc9753040feea7bff2ec0193354e184] <==
	2025/10/18 10:36:43 Starting overwatch
	2025/10/18 10:36:43 Using namespace: kubernetes-dashboard
	2025/10/18 10:36:43 Using in-cluster config to connect to apiserver
	2025/10/18 10:36:43 Using secret token for csrf signing
	2025/10/18 10:36:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 10:36:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 10:36:43 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 10:36:43 Generating JWE encryption key
	2025/10/18 10:36:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 10:36:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 10:36:44 Initializing JWE encryption key from synchronized object
	2025/10/18 10:36:44 Creating in-cluster Sidecar client
	2025/10/18 10:36:44 Serving insecurely on HTTP port: 9090
	2025/10/18 10:36:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 10:37:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2cbdb2a8528e4250452cbcfde4d0a6d774dfa919eece0abfe3baf1ff93f2c38d] <==
	W1018 10:37:02.197553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:05.659989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:09.920138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:13.518756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:16.571662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:19.593689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:19.598687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:37:19.598845       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 10:37:19.599005       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-027087_3034f84f-0d58-40f5-901d-adb53244db78!
	I1018 10:37:19.601031       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a2f18fe6-030e-454a-877d-bce5a2ea2a3e", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-027087_3034f84f-0d58-40f5-901d-adb53244db78 became leader
	W1018 10:37:19.602488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:19.608380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 10:37:19.700628       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-027087_3034f84f-0d58-40f5-901d-adb53244db78!
	W1018 10:37:21.611813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:21.618908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:23.622276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:23.626939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:25.631256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:25.638369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:27.641793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:27.647654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:29.659983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:29.666266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:31.673039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 10:37:31.685759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c22c014947e9e9dc024d5d72f215ef4605e6ee6ca05a8753ddd66dd51ee9561c] <==
	I1018 10:36:31.610412       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 10:37:01.616833       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-027087 -n no-preload-027087
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-027087 -n no-preload-027087: exit status 2 (381.240388ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-027087 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.65s)
E1018 10:43:21.937684  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (259/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.22
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.39
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.14
18 TestDownloadOnly/v1.34.1/DeleteAll 0.35
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.26
21 TestBinaryMirror 0.63
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 174.91
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 9.84
48 TestAddons/StoppedEnableDisable 12.41
49 TestCertOptions 36.54
50 TestCertExpiration 243.27
52 TestForceSystemdFlag 38.55
53 TestForceSystemdEnv 47.24
59 TestErrorSpam/setup 34.04
60 TestErrorSpam/start 0.76
61 TestErrorSpam/status 1.14
62 TestErrorSpam/pause 6.6
63 TestErrorSpam/unpause 5.25
64 TestErrorSpam/stop 1.52
67 TestFunctional/serial/CopySyncFile 0.01
68 TestFunctional/serial/StartWithProxy 81.79
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 27.08
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.11
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.57
76 TestFunctional/serial/CacheCmd/cache/add_local 1.09
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.17
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.15
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 52.64
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.48
87 TestFunctional/serial/LogsFileCmd 1.46
88 TestFunctional/serial/InvalidService 3.95
90 TestFunctional/parallel/ConfigCmd 0.43
91 TestFunctional/parallel/DashboardCmd 10.32
92 TestFunctional/parallel/DryRun 0.43
93 TestFunctional/parallel/InternationalLanguage 0.22
94 TestFunctional/parallel/StatusCmd 1.09
99 TestFunctional/parallel/AddonsCmd 0.18
100 TestFunctional/parallel/PersistentVolumeClaim 27.08
102 TestFunctional/parallel/SSHCmd 0.75
103 TestFunctional/parallel/CpCmd 2.33
105 TestFunctional/parallel/FileSync 0.37
106 TestFunctional/parallel/CertSync 2.27
110 TestFunctional/parallel/NodeLabels 0.12
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.85
114 TestFunctional/parallel/License 0.4
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.49
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
128 TestFunctional/parallel/ProfileCmd/profile_list 0.42
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
130 TestFunctional/parallel/MountCmd/any-port 8.01
131 TestFunctional/parallel/MountCmd/specific-port 2.12
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.75
133 TestFunctional/parallel/ServiceCmd/List 0.64
134 TestFunctional/parallel/ServiceCmd/JSONOutput 1.4
138 TestFunctional/parallel/Version/short 0.07
139 TestFunctional/parallel/Version/components 1.43
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.94
145 TestFunctional/parallel/ImageCommands/Setup 0.65
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.8
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 201.38
164 TestMultiControlPlane/serial/DeployApp 37.87
165 TestMultiControlPlane/serial/PingHostFromPods 1.47
166 TestMultiControlPlane/serial/AddWorkerNode 60.95
167 TestMultiControlPlane/serial/NodeLabels 0.12
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.04
169 TestMultiControlPlane/serial/CopyFile 20.16
170 TestMultiControlPlane/serial/StopSecondaryNode 12.87
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
172 TestMultiControlPlane/serial/RestartSecondaryNode 28.83
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.31
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 210.36
175 TestMultiControlPlane/serial/DeleteSecondaryNode 12.26
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
177 TestMultiControlPlane/serial/StopCluster 36.23
178 TestMultiControlPlane/serial/RestartCluster 69.06
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
180 TestMultiControlPlane/serial/AddSecondaryNode 78.54
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
185 TestJSONOutput/start/Command 86.6
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.82
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 37.52
211 TestKicCustomNetwork/use_default_bridge_network 36.72
212 TestKicExistingNetwork 36.27
213 TestKicCustomSubnet 38.26
214 TestKicStaticIP 34.53
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 72.63
219 TestMountStart/serial/StartWithMountFirst 7.23
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 9.2
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.74
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 8.28
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 139.02
231 TestMultiNode/serial/DeployApp2Nodes 5.28
232 TestMultiNode/serial/PingHostFrom2Pods 0.9
233 TestMultiNode/serial/AddNode 56.45
234 TestMultiNode/serial/MultiNodeLabels 0.12
235 TestMultiNode/serial/ProfileList 0.7
236 TestMultiNode/serial/CopyFile 10.34
237 TestMultiNode/serial/StopNode 2.62
238 TestMultiNode/serial/StartAfterStop 8.37
239 TestMultiNode/serial/RestartKeepsNodes 76.4
240 TestMultiNode/serial/DeleteNode 5.68
241 TestMultiNode/serial/StopMultiNode 24.01
242 TestMultiNode/serial/RestartMultiNode 56.21
243 TestMultiNode/serial/ValidateNameConflict 35.21
248 TestPreload 123.09
253 TestInsufficientStorage 13.33
254 TestRunningBinaryUpgrade 54.48
256 TestKubernetesUpgrade 208.14
257 TestMissingContainerUpgrade 120.33
259 TestPause/serial/Start 90.52
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
262 TestNoKubernetes/serial/StartWithK8s 45.02
263 TestNoKubernetes/serial/StartWithStopK8s 7.43
264 TestNoKubernetes/serial/Start 9.63
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
266 TestNoKubernetes/serial/ProfileList 1.11
267 TestNoKubernetes/serial/Stop 1.3
268 TestNoKubernetes/serial/StartNoArgs 7.03
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
270 TestPause/serial/SecondStartNoReconfiguration 31.94
272 TestStoppedBinaryUpgrade/Setup 0.7
273 TestStoppedBinaryUpgrade/Upgrade 58.04
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
289 TestNetworkPlugins/group/false 4.7
294 TestStartStop/group/old-k8s-version/serial/FirstStart 61.6
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.35
297 TestStartStop/group/old-k8s-version/serial/Stop 11.98
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
299 TestStartStop/group/old-k8s-version/serial/SecondStart 52.68
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.78
307 TestStartStop/group/embed-certs/serial/FirstStart 87.32
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.34
309 TestStartStop/group/embed-certs/serial/DeployApp 9.33
311 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.96
313 TestStartStop/group/embed-certs/serial/Stop 12.01
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.78
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
317 TestStartStop/group/embed-certs/serial/SecondStart 63.95
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
324 TestStartStop/group/no-preload/serial/FirstStart 76.23
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
329 TestStartStop/group/newest-cni/serial/FirstStart 45.32
330 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/Stop 1.61
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
334 TestStartStop/group/newest-cni/serial/SecondStart 15.87
335 TestStartStop/group/no-preload/serial/DeployApp 8.51
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
341 TestStartStop/group/no-preload/serial/Stop 12.4
342 TestNetworkPlugins/group/auto/Start 87.92
343 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.58
344 TestStartStop/group/no-preload/serial/SecondStart 62.84
345 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
346 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
347 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
349 TestNetworkPlugins/group/auto/KubeletFlags 0.37
350 TestNetworkPlugins/group/auto/NetCatPod 11.37
351 TestNetworkPlugins/group/kindnet/Start 86.77
352 TestNetworkPlugins/group/auto/DNS 0.24
353 TestNetworkPlugins/group/auto/Localhost 0.16
354 TestNetworkPlugins/group/auto/HairPin 0.15
355 TestNetworkPlugins/group/calico/Start 61.82
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
358 TestNetworkPlugins/group/kindnet/NetCatPod 10.38
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/calico/KubeletFlags 0.32
361 TestNetworkPlugins/group/calico/NetCatPod 9.26
362 TestNetworkPlugins/group/kindnet/DNS 0.23
363 TestNetworkPlugins/group/kindnet/Localhost 0.19
364 TestNetworkPlugins/group/kindnet/HairPin 0.19
365 TestNetworkPlugins/group/calico/DNS 0.2
366 TestNetworkPlugins/group/calico/Localhost 0.21
367 TestNetworkPlugins/group/calico/HairPin 0.16
368 TestNetworkPlugins/group/custom-flannel/Start 75.61
369 TestNetworkPlugins/group/enable-default-cni/Start 78.1
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.27
372 TestNetworkPlugins/group/custom-flannel/DNS 0.17
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
377 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
378 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
379 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
380 TestNetworkPlugins/group/flannel/Start 71.33
381 TestNetworkPlugins/group/bridge/Start 57.44
382 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
383 TestNetworkPlugins/group/flannel/ControllerPod 6
384 TestNetworkPlugins/group/bridge/NetCatPod 11.28
385 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
386 TestNetworkPlugins/group/flannel/NetCatPod 10.31
387 TestNetworkPlugins/group/bridge/DNS 0.16
388 TestNetworkPlugins/group/bridge/Localhost 0.13
389 TestNetworkPlugins/group/bridge/HairPin 0.12
390 TestNetworkPlugins/group/flannel/DNS 0.16
391 TestNetworkPlugins/group/flannel/Localhost 0.15
392 TestNetworkPlugins/group/flannel/HairPin 0.17
TestDownloadOnly/v1.28.0/json-events (6.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-195254 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-195254 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.222212538s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1018 09:30:22.681056  295193 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1018 09:30:22.681143  295193 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-195254
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-195254: exit status 85 (95.045739ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-195254 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-195254 │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:30:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:30:16.503597  295198 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:30:16.503748  295198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:30:16.503761  295198 out.go:374] Setting ErrFile to fd 2...
	I1018 09:30:16.503790  295198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:30:16.504077  295198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	W1018 09:30:16.504214  295198 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21764-293333/.minikube/config/config.json: open /home/jenkins/minikube-integration/21764-293333/.minikube/config/config.json: no such file or directory
	I1018 09:30:16.504621  295198 out.go:368] Setting JSON to true
	I1018 09:30:16.505489  295198 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4367,"bootTime":1760775450,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 09:30:16.505556  295198 start.go:141] virtualization:  
	I1018 09:30:16.509538  295198 out.go:99] [download-only-195254] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1018 09:30:16.509786  295198 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball: no such file or directory
	I1018 09:30:16.509862  295198 notify.go:220] Checking for updates...
	I1018 09:30:16.512818  295198 out.go:171] MINIKUBE_LOCATION=21764
	I1018 09:30:16.515671  295198 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:30:16.518747  295198 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 09:30:16.521673  295198 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 09:30:16.524651  295198 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1018 09:30:16.530541  295198 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 09:30:16.530928  295198 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:30:16.561627  295198 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:30:16.561751  295198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:30:16.625736  295198 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-18 09:30:16.616936454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:30:16.625842  295198 docker.go:318] overlay module found
	I1018 09:30:16.628913  295198 out.go:99] Using the docker driver based on user configuration
	I1018 09:30:16.628953  295198 start.go:305] selected driver: docker
	I1018 09:30:16.628965  295198 start.go:925] validating driver "docker" against <nil>
	I1018 09:30:16.629073  295198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:30:16.680469  295198 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-18 09:30:16.671106554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:30:16.680626  295198 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:30:16.680920  295198 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1018 09:30:16.681076  295198 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 09:30:16.684320  295198 out.go:171] Using Docker driver with root privileges
	I1018 09:30:16.687310  295198 cni.go:84] Creating CNI manager for ""
	I1018 09:30:16.687385  295198 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:30:16.687401  295198 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:30:16.687481  295198 start.go:349] cluster config:
	{Name:download-only-195254 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-195254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:30:16.690494  295198 out.go:99] Starting "download-only-195254" primary control-plane node in "download-only-195254" cluster
	I1018 09:30:16.690514  295198 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:30:16.693281  295198 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:30:16.693307  295198 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 09:30:16.693457  295198 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:30:16.708495  295198 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 09:30:16.709349  295198 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 09:30:16.709455  295198 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 09:30:16.749581  295198 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1018 09:30:16.749607  295198 cache.go:58] Caching tarball of preloaded images
	I1018 09:30:16.749774  295198 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 09:30:16.753060  295198 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1018 09:30:16.753088  295198 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1018 09:30:16.844180  295198 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1018 09:30:16.844351  295198 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1018 09:30:20.552497  295198 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1018 09:30:20.552901  295198 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/download-only-195254/config.json ...
	I1018 09:30:20.552940  295198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/download-only-195254/config.json: {Name:mk5333d6b9b5afafa20b3fcf9c78b9224b355766 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:30:20.553145  295198 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 09:30:20.554034  295198 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21764-293333/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-195254 host does not exist
	  To start a cluster, run: "minikube start -p download-only-195254"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
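The log above shows the preload flow: minikube asks the GCS API for the tarball's MD5, appends it to the download URL as "?checksum=md5:...", and verifies the digest while downloading. As a rough sketch of that contract only (hypothetical names, not the implementation behind download.go), a digest-verified download in Go could look like:

	// Illustrative only: a checksum-verified download in the spirit of the
	// "?checksum=md5:..." URL in the log above. Package and function names
	// are hypothetical.
	package preload

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// fetchWithMD5 streams url into dest while hashing the bytes, then
	// rejects the file if its hex-encoded MD5 digest does not match want.
	func fetchWithMD5(url, dest, want string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

With the values from this run, want would be "e092595ade89dbfc477bd4cd6b9c633b"; the kubectl download in the same log uses the same idea with a sha256 digest taken from a ".sha256" sidecar file instead.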

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-195254
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (4.39s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-370905 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-370905 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.389507421s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.39s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1018 09:30:27.527201  295193 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1018 09:30:27.527240  295193 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-370905
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-370905: exit status 85 (142.903771ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-195254 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-195254 │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ delete  │ -p download-only-195254                                                                                                                                                   │ download-only-195254 │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ start   │ -o=json --download-only -p download-only-370905 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-370905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:30:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:30:23.179460  295398 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:30:23.179615  295398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:30:23.179626  295398 out.go:374] Setting ErrFile to fd 2...
	I1018 09:30:23.179631  295398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:30:23.179914  295398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:30:23.180336  295398 out.go:368] Setting JSON to true
	I1018 09:30:23.181250  295398 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4374,"bootTime":1760775450,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 09:30:23.181323  295398 start.go:141] virtualization:  
	I1018 09:30:23.184898  295398 out.go:99] [download-only-370905] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:30:23.185231  295398 notify.go:220] Checking for updates...
	I1018 09:30:23.189266  295398 out.go:171] MINIKUBE_LOCATION=21764
	I1018 09:30:23.192325  295398 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:30:23.195324  295398 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 09:30:23.198413  295398 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 09:30:23.201407  295398 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1018 09:30:23.207207  295398 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 09:30:23.207542  295398 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:30:23.229401  295398 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:30:23.229524  295398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:30:23.286636  295398 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 09:30:23.277625384 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:30:23.286746  295398 docker.go:318] overlay module found
	I1018 09:30:23.289759  295398 out.go:99] Using the docker driver based on user configuration
	I1018 09:30:23.289801  295398 start.go:305] selected driver: docker
	I1018 09:30:23.289825  295398 start.go:925] validating driver "docker" against <nil>
	I1018 09:30:23.289930  295398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:30:23.353893  295398 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 09:30:23.344612672 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:30:23.354066  295398 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:30:23.354355  295398 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1018 09:30:23.354505  295398 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 09:30:23.357522  295398 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-370905 host does not exist
	  To start a cluster, run: "minikube start -p download-only-370905"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.14s)

TestDownloadOnly/v1.34.1/DeleteAll (0.35s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.35s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.26s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-370905
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.26s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
I1018 09:30:29.385265  295193 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-816488 --alsologtostderr --binary-mirror http://127.0.0.1:39529 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-816488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-816488
--- PASS: TestBinaryMirror (0.63s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-006674
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-006674: exit status 85 (90.055604ms)

-- stdout --
	* Profile "addons-006674" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-006674"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-006674
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-006674: exit status 85 (85.080059ms)

-- stdout --
	* Profile "addons-006674" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-006674"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (174.91s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-006674 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-006674 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m54.911908307s)
--- PASS: TestAddons/Setup (174.91s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-006674 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-006674 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (9.84s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-006674 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-006674 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6a2b0da8-a07b-49f4-8b63-821347de2204] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6a2b0da8-a07b-49f4-8b63-821347de2204] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003036806s
addons_test.go:694: (dbg) Run:  kubectl --context addons-006674 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-006674 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-006674 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-006674 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.84s)

TestAddons/StoppedEnableDisable (12.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-006674
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-006674: (12.11910786s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-006674
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-006674
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-006674
--- PASS: TestAddons/StoppedEnableDisable (12.41s)

TestCertOptions (36.54s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-233372 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-233372 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.645071638s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-233372 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-233372 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-233372 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-233372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-233372
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-233372: (2.143449214s)
--- PASS: TestCertOptions (36.54s)
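TestCertOptions passes extra --apiserver-ips and --apiserver-names flags and then inspects /var/lib/minikube/certs/apiserver.crt with openssl over SSH to confirm the values landed in the certificate. The same assertion can be written against crypto/x509; a minimal sketch, assuming a PEM-encoded certificate (hasSAN is a hypothetical helper, not part of the suite):

	// Sketch of the SAN check the test performs with openssl.
	package certcheck

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	// hasSAN reports whether the PEM certificate at path lists both the
	// given DNS name and IP among its Subject Alternative Names.
	func hasSAN(path, dnsName, ip string) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		var dnsOK, ipOK bool
		for _, d := range cert.DNSNames {
			dnsOK = dnsOK || d == dnsName
		}
		for _, a := range cert.IPAddresses {
			ipOK = ipOK || a.Equal(net.ParseIP(ip))
		}
		return dnsOK && ipOK, nil
	}

For the run above, hasSAN(path, "www.google.com", "192.168.15.15") should report true on a healthy cluster.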

TestCertExpiration (243.27s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-733799 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-733799 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.089254691s)
E1018 10:28:09.281456  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:28:26.202969  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-733799 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-733799 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (19.786868316s)
helpers_test.go:175: Cleaning up "cert-expiration-733799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-733799
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-733799: (4.39368269s)
--- PASS: TestCertExpiration (243.27s)

TestForceSystemdFlag (38.55s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-825845 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-825845 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.547753218s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-825845 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-825845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-825845
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-825845: (2.633432973s)
--- PASS: TestForceSystemdFlag (38.55s)
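TestForceSystemdFlag starts the node with --force-systemd and then reads /etc/crio/crio.conf.d/02-crio.conf over SSH, presumably to confirm that CRI-O was switched to the systemd cgroup manager (cgroup_manager is CRI-O's config key for this; the helper below is a hypothetical sketch, not the test's code):

	// Hedged sketch: does a CRI-O drop-in select the systemd cgroup
	// manager via an uncommented cgroup_manager line?
	package crioconf

	import (
		"bufio"
		"strings"
	)

	func usesSystemdCgroups(conf string) bool {
		sc := bufio.NewScanner(strings.NewReader(conf))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "#") {
				continue // ignore comments
			}
			if strings.HasPrefix(line, "cgroup_manager") && strings.Contains(line, `"systemd"`) {
				return true
			}
		}
		return false
	}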

TestForceSystemdEnv (47.24s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-360583 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-360583 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.255522877s)
helpers_test.go:175: Cleaning up "force-systemd-env-360583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-360583
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-360583: (2.981097661s)
--- PASS: TestForceSystemdEnv (47.24s)

TestErrorSpam/setup (34.04s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-889800 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-889800 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-889800 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-889800 --driver=docker  --container-runtime=crio: (34.037463737s)
--- PASS: TestErrorSpam/setup (34.04s)

TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

TestErrorSpam/status (1.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 status
--- PASS: TestErrorSpam/status (1.14s)

TestErrorSpam/pause (6.6s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 pause: exit status 80 (1.95869718s)

-- stdout --
	* Pausing node nospam-889800 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:37:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 pause: exit status 80 (2.275891248s)

-- stdout --
	* Pausing node nospam-889800 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:37:51Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 pause: exit status 80 (2.366479337s)

-- stdout --
	* Pausing node nospam-889800 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:37:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.60s)
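Every exit-80 failure in this test reduces to the same root cause: the pause path shells out to "sudo runc list -f json" on the node, and because /run/runc does not exist on this image, runc exits non-zero before any container can be listed. A sketch of that step (struct and function names are illustrative; real runc output carries more fields):

	// Illustrative reconstruction of the "runc list -f json" call named in
	// the GUEST_PAUSE error above; not minikube's code.
	package runcquery

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type containerState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// listRunning returns the IDs of running containers. If runc itself
	// fails (for example when /run/runc is missing), the error is wrapped
	// and surfaced, matching the log above.
	func listRunning() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("runc list -f json: %w", err)
		}
		var states []containerState
		if err := json.Unmarshal(out, &states); err != nil {
			return nil, err
		}
		var ids []string
		for _, s := range states {
			if s.Status == "running" {
				ids = append(ids, s.ID)
			}
		}
		return ids, nil
	}

The unpause runs below fail the same way; only the listing target (paused rather than running containers) differs.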

TestErrorSpam/unpause (5.25s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 unpause: exit status 80 (1.743283559s)

-- stdout --
	* Unpausing node nospam-889800 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:37:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 unpause: exit status 80 (1.816952665s)

-- stdout --
	* Unpausing node nospam-889800 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:37:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 unpause: exit status 80 (1.687510727s)

-- stdout --
	* Unpausing node nospam-889800 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:37:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.25s)

TestErrorSpam/stop (1.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 stop: (1.320813824s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-889800 --log_dir /tmp/nospam-889800 stop
--- PASS: TestErrorSpam/stop (1.52s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21764-293333/.minikube/files/etc/test/nested/copy/295193/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (81.79s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-679784 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1018 09:38:26.214901  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:38:26.221253  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:38:26.232559  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:38:26.253900  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:38:26.295228  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:38:26.376641  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:38:26.538149  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:38:26.859822  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:38:27.501844  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:38:28.783183  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:38:31.345297  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:38:36.467345  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:38:46.709417  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:39:07.190832  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-679784 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m21.79325889s)
--- PASS: TestFunctional/serial/StartWithProxy (81.79s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.08s)

=== RUN   TestFunctional/serial/SoftStart
I1018 09:39:27.179002  295193 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-679784 --alsologtostderr -v=8
E1018 09:39:48.153065  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-679784 --alsologtostderr -v=8: (27.076516916s)
functional_test.go:678: soft start took 27.082897255s for "functional-679784" cluster.
I1018 09:39:54.255832  295193 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (27.08s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-679784 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-679784 cache add registry.k8s.io/pause:3.1: (1.23268694s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-679784 cache add registry.k8s.io/pause:3.3: (1.240406432s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-679784 cache add registry.k8s.io/pause:latest: (1.09270382s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)
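The remote-cache flow above can be reproduced by hand; a minimal sketch against the same profile (image tag as used by the test, verification step is an addition):

    # Pull a remote image into minikube's local cache and load it into the node.
    out/minikube-linux-arm64 -p functional-679784 cache add registry.k8s.io/pause:3.1
    # Confirm the image landed in the crio image store inside the node.
    out/minikube-linux-arm64 -p functional-679784 ssh sudo crictl images | grep pause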

TestFunctional/serial/CacheCmd/cache/add_local (1.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-679784 /tmp/TestFunctionalserialCacheCmdcacheadd_local2715761496/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 cache add minikube-local-cache-test:functional-679784
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 cache delete minikube-local-cache-test:functional-679784
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-679784
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)
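A sketch of the local-image variant, assuming any trivial Dockerfile in ./ctx (placeholder build context):

    # Build a throwaway image on the host, cache it into the cluster, then clean up.
    docker build -t minikube-local-cache-test:functional-679784 ./ctx
    out/minikube-linux-arm64 -p functional-679784 cache add minikube-local-cache-test:functional-679784
    out/minikube-linux-arm64 -p functional-679784 cache delete minikube-local-cache-test:functional-679784
    docker rmi minikube-local-cache-test:functional-679784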

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-679784 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (358.761767ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-679784 cache reload: (1.156727824s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)
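The reload cycle the test walks through, condensed into a standalone sketch (same commands as above):

    # Remove the image from the node; inspecti then fails, proving it is gone.
    out/minikube-linux-arm64 -p functional-679784 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-679784 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "absent, as expected"
    # Repopulate the node from minikube's on-disk cache and re-verify.
    out/minikube-linux-arm64 -p functional-679784 cache reload
    out/minikube-linux-arm64 -p functional-679784 ssh sudo crictl inspecti registry.k8s.io/pause:latest >/dev/null && echo "restored"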

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 kubectl -- --context functional-679784 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-679784 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (52.64s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-679784 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-679784 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (52.635421705s)
functional_test.go:776: restart took 52.635528039s for "functional-679784" cluster.
I1018 09:40:54.723917  295193 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (52.64s)
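A sketch of the same restart, plus one way to confirm the flag took effect; the static-pod manifest path is an assumption based on the standard kubeadm layout, not something this log verifies:

    # Restart the running cluster with an extra apiserver flag and wait for all components.
    out/minikube-linux-arm64 start -p functional-679784 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # The flag should appear in the regenerated kube-apiserver static-pod manifest.
    out/minikube-linux-arm64 -p functional-679784 ssh \
      "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"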

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-679784 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
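The same health probe by hand, as a sketch (the jsonpath template is an illustrative addition):

    # Every control-plane pod should report phase Running and condition Ready.
    kubectl --context functional-679784 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.phase}{"\n"}{end}'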

TestFunctional/serial/LogsCmd (1.48s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-679784 logs: (1.477719084s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

TestFunctional/serial/LogsFileCmd (1.46s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 logs --file /tmp/TestFunctionalserialLogsFileCmd3243176330/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-679784 logs --file /tmp/TestFunctionalserialLogsFileCmd3243176330/001/logs.txt: (1.46403881s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

TestFunctional/serial/InvalidService (3.95s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-679784 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-679784
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-679784: exit status 115 (392.745429ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32428 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-679784 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.95s)
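Reproducing the negative case by hand, with the exit code seen in this run:

    # A Service whose selector matches no running pod makes `minikube service` bail out.
    kubectl --context functional-679784 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-arm64 service invalid-svc -p functional-679784
    echo $?   # 115 (SVC_UNREACHABLE) in this run
    kubectl --context functional-679784 delete -f testdata/invalidsvc.yaml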

TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-679784 config get cpus: exit status 14 (71.526328ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-679784 config get cpus: exit status 14 (56.952743ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
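The config round-trip as a sketch, with the exit codes observed above:

    # `config get` on an unset key exits 14; set/get/unset round-trips cleanly.
    out/minikube-linux-arm64 -p functional-679784 config get cpus; echo $?   # 14 while unset
    out/minikube-linux-arm64 -p functional-679784 config set cpus 2
    out/minikube-linux-arm64 -p functional-679784 config get cpus            # prints 2
    out/minikube-linux-arm64 -p functional-679784 config unset cpus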

TestFunctional/parallel/DashboardCmd (10.32s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-679784 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-679784 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 321633: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.32s)

TestFunctional/parallel/DryRun (0.43s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-679784 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-679784 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (185.398628ms)
-- stdout --
	* [functional-679784] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1018 09:51:30.492259  321233 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:51:30.492438  321233 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:51:30.492468  321233 out.go:374] Setting ErrFile to fd 2...
	I1018 09:51:30.492492  321233 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:51:30.492767  321233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:51:30.493149  321233 out.go:368] Setting JSON to false
	I1018 09:51:30.494070  321233 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5641,"bootTime":1760775450,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 09:51:30.494169  321233 start.go:141] virtualization:  
	I1018 09:51:30.497432  321233 out.go:179] * [functional-679784] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:51:30.500306  321233 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:51:30.500398  321233 notify.go:220] Checking for updates...
	I1018 09:51:30.506435  321233 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:51:30.509582  321233 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 09:51:30.512443  321233 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 09:51:30.515086  321233 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:51:30.517934  321233 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:51:30.521336  321233 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:51:30.521886  321233 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:51:30.551962  321233 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:51:30.552081  321233 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:51:30.606720  321233 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:51:30.597928276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:51:30.606833  321233 docker.go:318] overlay module found
	I1018 09:51:30.609926  321233 out.go:179] * Using the docker driver based on existing profile
	I1018 09:51:30.612746  321233 start.go:305] selected driver: docker
	I1018 09:51:30.612763  321233 start.go:925] validating driver "docker" against &{Name:functional-679784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-679784 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:51:30.612866  321233 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:51:30.616645  321233 out.go:203] 
	W1018 09:51:30.619508  321233 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 09:51:30.622371  321233 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-679784 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)
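A sketch of the dry-run validation the test relies on:

    # --dry-run validates the request without touching the cluster; an impossible
    # memory ask fails with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY), as above.
    out/minikube-linux-arm64 start -p functional-679784 --dry-run --memory 250MB \
      --driver=docker --container-runtime=crio; echo $?   # 23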

TestFunctional/parallel/InternationalLanguage (0.22s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-679784 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-679784 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (219.51109ms)
-- stdout --
	* [functional-679784] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1018 09:51:30.288919  321186 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:51:30.289161  321186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:51:30.289173  321186 out.go:374] Setting ErrFile to fd 2...
	I1018 09:51:30.289179  321186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:51:30.291742  321186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:51:30.292218  321186 out.go:368] Setting JSON to false
	I1018 09:51:30.293235  321186 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5641,"bootTime":1760775450,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 09:51:30.293357  321186 start.go:141] virtualization:  
	I1018 09:51:30.297233  321186 out.go:179] * [functional-679784] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1018 09:51:30.301258  321186 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:51:30.301351  321186 notify.go:220] Checking for updates...
	I1018 09:51:30.308066  321186 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:51:30.310955  321186 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 09:51:30.313808  321186 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 09:51:30.316677  321186 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:51:30.319597  321186 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:51:30.323061  321186 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:51:30.323683  321186 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:51:30.354341  321186 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:51:30.354459  321186 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:51:30.420473  321186 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:51:30.411145649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:51:30.420584  321186 docker.go:318] overlay module found
	I1018 09:51:30.423701  321186 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1018 09:51:30.426528  321186 start.go:305] selected driver: docker
	I1018 09:51:30.426548  321186 start.go:925] validating driver "docker" against &{Name:functional-679784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-679784 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:51:30.426655  321186 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:51:30.430210  321186 out.go:203] 
	W1018 09:51:30.432954  321186 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1018 09:51:30.435772  321186 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1.09s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)

TestFunctional/parallel/AddonsCmd (0.18s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (27.08s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [e66fbf6a-302e-4c8e-8978-fa38d6b51354] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004308973s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-679784 get storageclass -o=json
E1018 09:41:10.075302  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-679784 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-679784 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-679784 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0be56250-2b13-4f20-b4c6-06bd9a1d11c3] Pending
helpers_test.go:352: "sp-pod" [0be56250-2b13-4f20-b4c6-06bd9a1d11c3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [0be56250-2b13-4f20-b4c6-06bd9a1d11c3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004063267s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-679784 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-679784 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-679784 delete -f testdata/storage-provisioner/pod.yaml: (1.116964273s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-679784 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [73d09a5d-2560-4139-9edd-0af6cc3fd8d4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [73d09a5d-2560-4139-9edd-0af6cc3fd8d4] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00289438s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-679784 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.08s)
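The persistence check condensed into a sketch (manifests from the repo's testdata; the `kubectl wait` calls stand in for the harness's pod polling):

    # Write through the PVC, recreate the pod, and confirm the file survives.
    kubectl --context functional-679784 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-679784 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-679784 wait --for=condition=Ready pod/sp-pod --timeout=4m
    kubectl --context functional-679784 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-679784 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-679784 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-679784 wait --for=condition=Ready pod/sp-pod --timeout=4m
    kubectl --context functional-679784 exec sp-pod -- ls /tmp/mount   # foo persists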

TestFunctional/parallel/SSHCmd (0.75s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

TestFunctional/parallel/CpCmd (2.33s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh -n functional-679784 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 cp functional-679784:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4151707404/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh -n functional-679784 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh -n functional-679784 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.33s)
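The copy round-trip as a sketch (the host destination path is a placeholder):

    # Host -> node, read back over ssh, then node -> host.
    out/minikube-linux-arm64 -p functional-679784 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-arm64 -p functional-679784 ssh -n functional-679784 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-arm64 -p functional-679784 cp functional-679784:/home/docker/cp-test.txt /tmp/cp-test.txt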

TestFunctional/parallel/FileSync (0.37s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/295193/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "sudo cat /etc/test/nested/copy/295193/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.27s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/295193.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "sudo cat /etc/ssl/certs/295193.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/295193.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "sudo cat /usr/share/ca-certificates/295193.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2951932.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "sudo cat /etc/ssl/certs/2951932.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2951932.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "sudo cat /usr/share/ca-certificates/2951932.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.27s)
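Spot-checking cert sync by hand; the .0 filenames appear to be OpenSSL subject-hash names (an assumption, not verified by this log), so 51391683.0 should resolve to the same cert as 295193.pem:

    # The host-side test cert is synced to several canonical guest locations.
    out/minikube-linux-arm64 -p functional-679784 ssh "sudo cat /etc/ssl/certs/295193.pem"
    out/minikube-linux-arm64 -p functional-679784 ssh "sudo cat /etc/ssl/certs/51391683.0"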

TestFunctional/parallel/NodeLabels (0.12s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-679784 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.85s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-679784 ssh "sudo systemctl is-active docker": exit status 1 (392.721114ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-679784 ssh "sudo systemctl is-active containerd": exit status 1 (452.287033ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.85s)
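The same runtime check by hand; systemctl's is-active exits 3 for inactive units, which is why the ssh calls above return non-zero (the crio line is an added positive control):

    out/minikube-linux-arm64 -p functional-679784 ssh "sudo systemctl is-active crio"        # active
    out/minikube-linux-arm64 -p functional-679784 ssh "sudo systemctl is-active docker"      # inactive
    out/minikube-linux-arm64 -p functional-679784 ssh "sudo systemctl is-active containerd"  # inactive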

TestFunctional/parallel/License (0.4s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.40s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-679784 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-679784 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-679784 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-679784 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 317718: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-679784 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-679784 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [4161d7ad-babe-4da7-84f6-884fb4f64d63] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [4161d7ad-babe-4da7-84f6-884fb4f64d63] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003926242s
I1018 09:41:12.074798  295193 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-679784 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.63.238 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
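The tunnel flow end to end, as a sketch (the curl check is an illustrative addition):

    # With `minikube tunnel` running, a LoadBalancer Service gets a host-reachable IP.
    out/minikube-linux-arm64 -p functional-679784 tunnel &
    kubectl --context functional-679784 apply -f testdata/testsvc.yaml
    IP=$(kubectl --context functional-679784 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')   # 10.102.63.238 in this run
    curl -sf "http://$IP" >/dev/null && echo "tunnel works"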

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-679784 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "367.723161ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "56.124504ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "358.604485ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "60.57084ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/MountCmd/any-port (8.01s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-679784 /tmp/TestFunctionalparallelMountCmdany-port2067351799/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760781077256545119" to /tmp/TestFunctionalparallelMountCmdany-port2067351799/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760781077256545119" to /tmp/TestFunctionalparallelMountCmdany-port2067351799/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760781077256545119" to /tmp/TestFunctionalparallelMountCmdany-port2067351799/001/test-1760781077256545119
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-679784 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (364.041966ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1018 09:51:17.625110  295193 retry.go:31] will retry after 601.081107ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 18 09:51 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 18 09:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 18 09:51 test-1760781077256545119
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh cat /mount-9p/test-1760781077256545119
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-679784 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [da463263-7228-497e-8756-5d49a4552a4f] Pending
helpers_test.go:352: "busybox-mount" [da463263-7228-497e-8756-5d49a4552a4f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [da463263-7228-497e-8756-5d49a4552a4f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [da463263-7228-497e-8756-5d49a4552a4f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003316976s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-679784 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-679784 /tmp/TestFunctionalparallelMountCmdany-port2067351799/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.01s)
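
For readers reproducing this flow by hand, the steps above reduce to a short loop. A sketch under the same setup, with /tmp/mnt standing in for the test's temporary directory (a placeholder, not the path the harness used):

# Start the 9p mount in the background: host dir -> guest /mount-9p.
out/minikube-linux-arm64 mount -p functional-679784 /tmp/mnt:/mount-9p &
MOUNT_PID=$!

# The mount appears asynchronously, hence the retry on findmnt seen above.
until out/minikube-linux-arm64 -p functional-679784 ssh "findmnt -T /mount-9p | grep 9p"; do
  sleep 1
done

# Files written on the host are visible in the guest (and vice versa).
echo "hello" > /tmp/mnt/created-by-host
out/minikube-linux-arm64 -p functional-679784 ssh "cat /mount-9p/created-by-host"

# Tear down: force-unmount in the guest, then stop the mount process.
out/minikube-linux-arm64 -p functional-679784 ssh "sudo umount -f /mount-9p"
kill "$MOUNT_PID"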

TestFunctional/parallel/MountCmd/specific-port (2.12s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-679784 /tmp/TestFunctionalparallelMountCmdspecific-port2912812339/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-679784 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (351.694639ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1018 09:51:25.617114  295193 retry.go:31] will retry after 702.529315ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-679784 /tmp/TestFunctionalparallelMountCmdspecific-port2912812339/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-679784 ssh "sudo umount -f /mount-9p": exit status 1 (281.965251ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-679784 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-679784 /tmp/TestFunctionalparallelMountCmdspecific-port2912812339/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.12s)
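
The only functional difference from any-port is that --port pins the 9p server to a fixed host port, useful when a firewall sits between host and node. Note the cleanup semantics visible above: once the mount daemon is stopped, a forced umount in the guest fails with "not mounted" (umount's status 32), which the test tolerates. A sketch, assuming port 46464 is free:

# Serve the 9p filesystem on a fixed, firewall-friendly port.
out/minikube-linux-arm64 mount -p functional-679784 /tmp/mnt:/mount-9p --port 46464 &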

TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-679784 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3433722384/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-679784 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3433722384/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-679784 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3433722384/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-679784 ssh "findmnt -T" /mount1: exit status 1 (587.415549ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1018 09:51:27.983368  295193 retry.go:31] will retry after 256.535712ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-679784 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-679784 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3433722384/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-679784 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3433722384/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-679784 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3433722384/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)
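
The cleanup path exercised here is the --kill flag, which tears down every mount daemon belonging to a profile at once instead of stopping each one individually. A sketch of the same sequence, again with /tmp/mnt as a placeholder host directory:

# Mount one host directory at three guest paths.
for m in /mount1 /mount2 /mount3; do
  out/minikube-linux-arm64 mount -p functional-679784 /tmp/mnt:$m &
done

# Verify each target, then kill all mount processes for the profile in one shot.
for m in /mount1 /mount2 /mount3; do
  out/minikube-linux-arm64 -p functional-679784 ssh "findmnt -T $m"
done
out/minikube-linux-arm64 mount -p functional-679784 --kill=true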

TestFunctional/parallel/ServiceCmd/List (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.64s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-679784 service list -o json: (1.399467725s)
functional_test.go:1504: Took "1.399557571s" to run "out/minikube-linux-arm64 -p functional-679784 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.40s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.43s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-679784 version -o=json --components: (1.433260666s)
--- PASS: TestFunctional/parallel/Version/components (1.43s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-679784 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-679784 image ls --format short --alsologtostderr:
I1018 09:51:46.428800  323930 out.go:360] Setting OutFile to fd 1 ...
I1018 09:51:46.428917  323930 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:51:46.428922  323930 out.go:374] Setting ErrFile to fd 2...
I1018 09:51:46.428927  323930 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:51:46.429238  323930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
I1018 09:51:46.429853  323930 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:51:46.429967  323930 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:51:46.430466  323930 cli_runner.go:164] Run: docker container inspect functional-679784 --format={{.State.Status}}
I1018 09:51:46.456450  323930 ssh_runner.go:195] Run: systemctl --version
I1018 09:51:46.456506  323930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679784
I1018 09:51:46.476317  323930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/functional-679784/id_rsa Username:docker}
I1018 09:51:46.583935  323930 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
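
The stderr trace shows what image ls actually does on a CRI-O runtime: SSH into the node and read crictl's image list. The same data can be queried directly; a sketch assuming jq on the host (repoTags can be absent for dangling images, hence the []?):

# What `image ls` wraps on crio: crictl's JSON image list, fetched over SSH.
out/minikube-linux-arm64 -p functional-679784 ssh "sudo crictl images --output json" \
  | jq -r '.images[].repoTags[]?'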

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-679784 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/library/nginx                 │ latest             │ e35ad067421cc │ 184MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-679784 image ls --format table --alsologtostderr:
I1018 09:51:46.696446  323998 out.go:360] Setting OutFile to fd 1 ...
I1018 09:51:46.697311  323998 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:51:46.697326  323998 out.go:374] Setting ErrFile to fd 2...
I1018 09:51:46.697332  323998 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:51:46.697613  323998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
I1018 09:51:46.698218  323998 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:51:46.698336  323998 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:51:46.698837  323998 cli_runner.go:164] Run: docker container inspect functional-679784 --format={{.State.Status}}
I1018 09:51:46.729642  323998 ssh_runner.go:195] Run: systemctl --version
I1018 09:51:46.729693  323998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679784
I1018 09:51:46.761895  323998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/functional-679784/id_rsa Username:docker}
I1018 09:51:46.880454  323998 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-679784 image ls --format json --alsologtostderr:
[{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9","repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6","docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a"],"repoTags":["docker.io/library/nginx:latest"],"size":"184136558"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-679784 image ls --format json --alsologtostderr:
I1018 09:51:46.446486  323931 out.go:360] Setting OutFile to fd 1 ...
I1018 09:51:46.446831  323931 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:51:46.446864  323931 out.go:374] Setting ErrFile to fd 2...
I1018 09:51:46.446884  323931 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:51:46.447189  323931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
I1018 09:51:46.449384  323931 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:51:46.449540  323931 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:51:46.450004  323931 cli_runner.go:164] Run: docker container inspect functional-679784 --format={{.State.Status}}
I1018 09:51:46.474037  323931 ssh_runner.go:195] Run: systemctl --version
I1018 09:51:46.474091  323931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679784
I1018 09:51:46.497360  323931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/functional-679784/id_rsa Username:docker}
I1018 09:51:46.614771  323931 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
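
Of the four formats, JSON is the one to script against. A sketch, assuming jq, that re-sorts this run's listing by size (the size field is a string of bytes, hence the tonumber):

out/minikube-linux-arm64 -p functional-679784 image ls --format json \
  | jq -r 'sort_by(.size | tonumber) | reverse | .[]
           | "\(.size)\t\(.repoTags[0] // .id[0:13])"'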

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-679784 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
- docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a
repoTags:
- docker.io/library/nginx:latest
size: "184136558"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-679784 image ls --format yaml --alsologtostderr:
I1018 09:51:46.990945  324095 out.go:360] Setting OutFile to fd 1 ...
I1018 09:51:46.991167  324095 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:51:46.991196  324095 out.go:374] Setting ErrFile to fd 2...
I1018 09:51:46.991214  324095 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:51:46.991498  324095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
I1018 09:51:46.992141  324095 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:51:46.992303  324095 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:51:46.992956  324095 cli_runner.go:164] Run: docker container inspect functional-679784 --format={{.State.Status}}
I1018 09:51:47.013976  324095 ssh_runner.go:195] Run: systemctl --version
I1018 09:51:47.014032  324095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679784
I1018 09:51:47.042639  324095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/functional-679784/id_rsa Username:docker}
I1018 09:51:47.160120  324095 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-679784 ssh pgrep buildkitd: exit status 1 (345.271237ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 image build -t localhost/my-image:functional-679784 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-679784 image build -t localhost/my-image:functional-679784 testdata/build --alsologtostderr: (3.349871309s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-679784 image build -t localhost/my-image:functional-679784 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d34e0afd8c5
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-679784
--> 0e0a1550b0f
Successfully tagged localhost/my-image:functional-679784
0e0a1550b0f12a012243333c8bb5b7c923feb237e3710aedfb16b4c35cc9f6b7
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-679784 image build -t localhost/my-image:functional-679784 testdata/build --alsologtostderr:
I1018 09:51:47.088659  324115 out.go:360] Setting OutFile to fd 1 ...
I1018 09:51:47.089487  324115 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:51:47.089526  324115 out.go:374] Setting ErrFile to fd 2...
I1018 09:51:47.089547  324115 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:51:47.089955  324115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
I1018 09:51:47.091307  324115 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:51:47.091981  324115 config.go:182] Loaded profile config "functional-679784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:51:47.092684  324115 cli_runner.go:164] Run: docker container inspect functional-679784 --format={{.State.Status}}
I1018 09:51:47.115208  324115 ssh_runner.go:195] Run: systemctl --version
I1018 09:51:47.115269  324115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679784
I1018 09:51:47.134084  324115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/functional-679784/id_rsa Username:docker}
I1018 09:51:47.247697  324115 build_images.go:161] Building image from path: /tmp/build.1196586845.tar
I1018 09:51:47.247787  324115 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1018 09:51:47.255716  324115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1196586845.tar
I1018 09:51:47.259429  324115 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1196586845.tar: stat -c "%s %y" /var/lib/minikube/build/build.1196586845.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1196586845.tar': No such file or directory
I1018 09:51:47.259460  324115 ssh_runner.go:362] scp /tmp/build.1196586845.tar --> /var/lib/minikube/build/build.1196586845.tar (3072 bytes)
I1018 09:51:47.277841  324115 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1196586845
I1018 09:51:47.285913  324115 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1196586845 -xf /var/lib/minikube/build/build.1196586845.tar
I1018 09:51:47.294103  324115 crio.go:315] Building image: /var/lib/minikube/build/build.1196586845
I1018 09:51:47.294189  324115 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-679784 /var/lib/minikube/build/build.1196586845 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1018 09:51:50.341103  324115 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-679784 /var/lib/minikube/build/build.1196586845 --cgroup-manager=cgroupfs: (3.046886829s)
I1018 09:51:50.341204  324115 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1196586845
I1018 09:51:50.349174  324115 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1196586845.tar
I1018 09:51:50.357062  324115 build_images.go:217] Built localhost/my-image:functional-679784 from /tmp/build.1196586845.tar
I1018 09:51:50.357090  324115 build_images.go:133] succeeded building to: functional-679784
I1018 09:51:50.357095  324115 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.94s)
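
The trace spells out the build pipeline on a CRI-O node: minikube tars the build context, copies it to /var/lib/minikube/build, unpacks it, and runs podman build inside the node with --cgroup-manager=cgroupfs. Reproducing the front half by hand is short; the Dockerfile below is a simplified stand-in for the test's testdata/build context:

# Build an image directly into the node's container storage.
mkdir -p /tmp/build
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\n' > /tmp/build/Dockerfile
out/minikube-linux-arm64 -p functional-679784 image build \
  -t localhost/my-image:functional-679784 /tmp/build --alsologtostderr

# Confirm it landed where crictl can see it.
out/minikube-linux-arm64 -p functional-679784 image ls | grep my-image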

TestFunctional/parallel/ImageCommands/Setup (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-679784
2025/10/18 09:51:40 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.65s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 image rm kicbase/echo-server:functional-679784 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-679784 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.80s)
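
A sketch of the remove-and-verify pair, assuming the echo-server tag loaded during the Setup step is still present in the node:

# Remove the tag from the node's image store, then confirm it is gone.
out/minikube-linux-arm64 -p functional-679784 image rm kicbase/echo-server:functional-679784
out/minikube-linux-arm64 -p functional-679784 image ls | grep echo-server || echo "removed"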

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-679784
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-679784
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-679784
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (201.38s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1018 09:53:26.203525  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:54:49.278006  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-333992 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m20.509197571s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (201.38s)
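
The --ha flag provisions three control-plane nodes rather than one. A sketch of confirming the topology once start returns; the label selector assumes the standard kubeadm control-plane node label:

out/minikube-linux-arm64 -p ha-333992 start --ha --memory 3072 --wait true \
  --driver=docker --container-runtime=crio

# Expect 3 control-plane nodes.
kubectl --context ha-333992 get nodes -l node-role.kubernetes.io/control-plane -o name | wc -l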

TestMultiControlPlane/serial/DeployApp (37.87s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-333992 kubectl -- rollout status deployment/busybox: (35.030070153s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- exec busybox-7b57f96db7-8dlwt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- exec busybox-7b57f96db7-dd9wg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- exec busybox-7b57f96db7-vf5lk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- exec busybox-7b57f96db7-8dlwt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- exec busybox-7b57f96db7-dd9wg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- exec busybox-7b57f96db7-vf5lk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- exec busybox-7b57f96db7-8dlwt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- exec busybox-7b57f96db7-dd9wg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- exec busybox-7b57f96db7-vf5lk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (37.87s)
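
The per-pod assertions above compress into one loop: wait for the rollout, then resolve the three names from inside every busybox replica. A sketch against the same context:

kubectl --context ha-333992 rollout status deployment/busybox
for pod in $(kubectl --context ha-333992 get pods -o jsonpath='{.items[*].metadata.name}'); do
  for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
    kubectl --context ha-333992 exec "$pod" -- nslookup "$name"
  done
done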

TestMultiControlPlane/serial/PingHostFromPods (1.47s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- exec busybox-7b57f96db7-8dlwt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- exec busybox-7b57f96db7-8dlwt -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- exec busybox-7b57f96db7-dd9wg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- exec busybox-7b57f96db7-dd9wg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- exec busybox-7b57f96db7-vf5lk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 kubectl -- exec busybox-7b57f96db7-vf5lk -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.47s)
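
The awk 'NR==5' in the probe works because busybox's nslookup prints the resolved address on its fifth output line; the follow-up ping targets 192.168.49.1, the gateway of the default kic docker network, which is how pods reach the host. A sketch against one of this run's pods:

# Resolve the host address from inside a pod, then ping the docker gateway.
kubectl --context ha-333992 exec busybox-7b57f96db7-8dlwt -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
kubectl --context ha-333992 exec busybox-7b57f96db7-8dlwt -- sh -c "ping -c 1 192.168.49.1"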

TestMultiControlPlane/serial/AddWorkerNode (60.95s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 node add --alsologtostderr -v 5
E1018 09:56:03.590950  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:56:03.598369  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:56:03.609771  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:56:03.631245  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:56:03.672702  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:56:03.754117  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:56:03.915667  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:56:04.237105  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:56:04.878910  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:56:06.160472  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:56:08.722921  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:56:13.845630  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:56:24.087683  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:56:44.569840  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-333992 node add --alsologtostderr -v 5: (59.916473554s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-333992 status --alsologtostderr -v 5: (1.036402723s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.95s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-333992 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.035294676s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

TestMultiControlPlane/serial/CopyFile (20.16s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-333992 status --output json --alsologtostderr -v 5: (1.030244449s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp testdata/cp-test.txt ha-333992:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp ha-333992:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1305103451/001/cp-test_ha-333992.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp ha-333992:/home/docker/cp-test.txt ha-333992-m02:/home/docker/cp-test_ha-333992_ha-333992-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m02 "sudo cat /home/docker/cp-test_ha-333992_ha-333992-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp ha-333992:/home/docker/cp-test.txt ha-333992-m03:/home/docker/cp-test_ha-333992_ha-333992-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m03 "sudo cat /home/docker/cp-test_ha-333992_ha-333992-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp ha-333992:/home/docker/cp-test.txt ha-333992-m04:/home/docker/cp-test_ha-333992_ha-333992-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m04 "sudo cat /home/docker/cp-test_ha-333992_ha-333992-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp testdata/cp-test.txt ha-333992-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp ha-333992-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1305103451/001/cp-test_ha-333992-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp ha-333992-m02:/home/docker/cp-test.txt ha-333992:/home/docker/cp-test_ha-333992-m02_ha-333992.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992 "sudo cat /home/docker/cp-test_ha-333992-m02_ha-333992.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp ha-333992-m02:/home/docker/cp-test.txt ha-333992-m03:/home/docker/cp-test_ha-333992-m02_ha-333992-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m03 "sudo cat /home/docker/cp-test_ha-333992-m02_ha-333992-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp ha-333992-m02:/home/docker/cp-test.txt ha-333992-m04:/home/docker/cp-test_ha-333992-m02_ha-333992-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m04 "sudo cat /home/docker/cp-test_ha-333992-m02_ha-333992-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp testdata/cp-test.txt ha-333992-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp ha-333992-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1305103451/001/cp-test_ha-333992-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp ha-333992-m03:/home/docker/cp-test.txt ha-333992:/home/docker/cp-test_ha-333992-m03_ha-333992.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992 "sudo cat /home/docker/cp-test_ha-333992-m03_ha-333992.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp ha-333992-m03:/home/docker/cp-test.txt ha-333992-m02:/home/docker/cp-test_ha-333992-m03_ha-333992-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m02 "sudo cat /home/docker/cp-test_ha-333992-m03_ha-333992-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp ha-333992-m03:/home/docker/cp-test.txt ha-333992-m04:/home/docker/cp-test_ha-333992-m03_ha-333992-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m04 "sudo cat /home/docker/cp-test_ha-333992-m03_ha-333992-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp testdata/cp-test.txt ha-333992-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp ha-333992-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1305103451/001/cp-test_ha-333992-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp ha-333992-m04:/home/docker/cp-test.txt ha-333992:/home/docker/cp-test_ha-333992-m04_ha-333992.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992 "sudo cat /home/docker/cp-test_ha-333992-m04_ha-333992.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp ha-333992-m04:/home/docker/cp-test.txt ha-333992-m02:/home/docker/cp-test_ha-333992-m04_ha-333992-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m02 "sudo cat /home/docker/cp-test_ha-333992-m04_ha-333992-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 cp ha-333992-m04:/home/docker/cp-test.txt ha-333992-m03:/home/docker/cp-test_ha-333992-m04_ha-333992-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 ssh -n ha-333992-m03 "sudo cat /home/docker/cp-test_ha-333992-m04_ha-333992-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.16s)
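
For reference, the copy-then-verify pattern exercised above can be reproduced by hand. A minimal Go sketch follows; it is not code from helpers_test.go, the binary path and profile name are copied from the log, and everything else is illustrative:

	// copy_roundtrip.go: sketch of the cp-and-verify round trip above.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		bin := "out/minikube-linux-arm64" // binary path from the log
		profile := "ha-333992"            // profile name from the log

		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			panic(err)
		}
		// Copy the file into the node, as "minikube cp" does above.
		if out, err := exec.Command(bin, "-p", profile, "cp",
			"testdata/cp-test.txt", profile+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
		}
		// Read the file back over ssh, mirroring the "sudo cat" checks.
		got, err := exec.Command(bin, "-p", profile, "ssh", "-n", profile,
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			panic("round-tripped file content does not match")
		}
		fmt.Println("cp round trip OK")
	}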

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.87s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 node stop m02 --alsologtostderr -v 5
E1018 09:57:25.531188  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-333992 node stop m02 --alsologtostderr -v 5: (12.058059538s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-333992 status --alsologtostderr -v 5: exit status 7 (813.627677ms)

                                                
                                                
-- stdout --
	ha-333992
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-333992-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-333992-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-333992-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:57:28.562805  338952 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:57:28.562990  338952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:57:28.562999  338952 out.go:374] Setting ErrFile to fd 2...
	I1018 09:57:28.563004  338952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:57:28.563303  338952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 09:57:28.563496  338952 out.go:368] Setting JSON to false
	I1018 09:57:28.563523  338952 mustload.go:65] Loading cluster: ha-333992
	I1018 09:57:28.563572  338952 notify.go:220] Checking for updates...
	I1018 09:57:28.563903  338952 config.go:182] Loaded profile config "ha-333992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:57:28.563921  338952 status.go:174] checking status of ha-333992 ...
	I1018 09:57:28.564507  338952 cli_runner.go:164] Run: docker container inspect ha-333992 --format={{.State.Status}}
	I1018 09:57:28.584304  338952 status.go:371] ha-333992 host status = "Running" (err=<nil>)
	I1018 09:57:28.584328  338952 host.go:66] Checking if "ha-333992" exists ...
	I1018 09:57:28.584640  338952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-333992
	I1018 09:57:28.616058  338952 host.go:66] Checking if "ha-333992" exists ...
	I1018 09:57:28.616495  338952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:57:28.616548  338952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-333992
	I1018 09:57:28.648223  338952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/ha-333992/id_rsa Username:docker}
	I1018 09:57:28.758422  338952 ssh_runner.go:195] Run: systemctl --version
	I1018 09:57:28.768645  338952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:57:28.783372  338952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:57:28.859735  338952 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-18 09:57:28.849980829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:57:28.860343  338952 kubeconfig.go:125] found "ha-333992" server: "https://192.168.49.254:8443"
	I1018 09:57:28.860377  338952 api_server.go:166] Checking apiserver status ...
	I1018 09:57:28.860423  338952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:57:28.873081  338952 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1245/cgroup
	I1018 09:57:28.881991  338952 api_server.go:182] apiserver freezer: "5:freezer:/docker/009606d5eb6c314f2516e44c0253ca6f249704d77b9e7fa7c82d2f0ceab61d76/crio/crio-bbe0c2bb56e6ffd23e8daf137d28499f92a93b5bda19ec0e7e5360e3d8a617be"
	I1018 09:57:28.882078  338952 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/009606d5eb6c314f2516e44c0253ca6f249704d77b9e7fa7c82d2f0ceab61d76/crio/crio-bbe0c2bb56e6ffd23e8daf137d28499f92a93b5bda19ec0e7e5360e3d8a617be/freezer.state
	I1018 09:57:28.889996  338952 api_server.go:204] freezer state: "THAWED"
	I1018 09:57:28.890025  338952 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 09:57:28.898908  338952 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 09:57:28.898937  338952 status.go:463] ha-333992 apiserver status = Running (err=<nil>)
	I1018 09:57:28.898948  338952 status.go:176] ha-333992 status: &{Name:ha-333992 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:57:28.898967  338952 status.go:174] checking status of ha-333992-m02 ...
	I1018 09:57:28.899275  338952 cli_runner.go:164] Run: docker container inspect ha-333992-m02 --format={{.State.Status}}
	I1018 09:57:28.919992  338952 status.go:371] ha-333992-m02 host status = "Stopped" (err=<nil>)
	I1018 09:57:28.920017  338952 status.go:384] host is not running, skipping remaining checks
	I1018 09:57:28.920025  338952 status.go:176] ha-333992-m02 status: &{Name:ha-333992-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:57:28.920059  338952 status.go:174] checking status of ha-333992-m03 ...
	I1018 09:57:28.920583  338952 cli_runner.go:164] Run: docker container inspect ha-333992-m03 --format={{.State.Status}}
	I1018 09:57:28.938914  338952 status.go:371] ha-333992-m03 host status = "Running" (err=<nil>)
	I1018 09:57:28.938938  338952 host.go:66] Checking if "ha-333992-m03" exists ...
	I1018 09:57:28.939237  338952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-333992-m03
	I1018 09:57:28.956449  338952 host.go:66] Checking if "ha-333992-m03" exists ...
	I1018 09:57:28.956752  338952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:57:28.956793  338952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-333992-m03
	I1018 09:57:28.973688  338952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/ha-333992-m03/id_rsa Username:docker}
	I1018 09:57:29.082906  338952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:57:29.097568  338952 kubeconfig.go:125] found "ha-333992" server: "https://192.168.49.254:8443"
	I1018 09:57:29.097600  338952 api_server.go:166] Checking apiserver status ...
	I1018 09:57:29.097665  338952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:57:29.110151  338952 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup
	I1018 09:57:29.118904  338952 api_server.go:182] apiserver freezer: "5:freezer:/docker/e84c8f23e08fec9f09325fa23c92b9d1146460ae03ba9d7fc8681e58f06180de/crio/crio-6cfa1c488692b515a6e6567ca48e2473f1cb07fd04d58c233c71abaef95acc03"
	I1018 09:57:29.118992  338952 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e84c8f23e08fec9f09325fa23c92b9d1146460ae03ba9d7fc8681e58f06180de/crio/crio-6cfa1c488692b515a6e6567ca48e2473f1cb07fd04d58c233c71abaef95acc03/freezer.state
	I1018 09:57:29.126890  338952 api_server.go:204] freezer state: "THAWED"
	I1018 09:57:29.126934  338952 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 09:57:29.135287  338952 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 09:57:29.135315  338952 status.go:463] ha-333992-m03 apiserver status = Running (err=<nil>)
	I1018 09:57:29.135353  338952 status.go:176] ha-333992-m03 status: &{Name:ha-333992-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:57:29.135377  338952 status.go:174] checking status of ha-333992-m04 ...
	I1018 09:57:29.135731  338952 cli_runner.go:164] Run: docker container inspect ha-333992-m04 --format={{.State.Status}}
	I1018 09:57:29.157748  338952 status.go:371] ha-333992-m04 host status = "Running" (err=<nil>)
	I1018 09:57:29.157774  338952 host.go:66] Checking if "ha-333992-m04" exists ...
	I1018 09:57:29.158077  338952 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-333992-m04
	I1018 09:57:29.175056  338952 host.go:66] Checking if "ha-333992-m04" exists ...
	I1018 09:57:29.175353  338952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:57:29.175395  338952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-333992-m04
	I1018 09:57:29.198647  338952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/ha-333992-m04/id_rsa Username:docker}
	I1018 09:57:29.306588  338952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:57:29.323211  338952 status.go:176] ha-333992-m04 status: &{Name:ha-333992-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.87s)
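
The stderr above shows the three-step probe that status runs against each control-plane node: pgrep the kube-apiserver process, read its freezer cgroup state, then hit /healthz on the load-balancer endpoint. A minimal Go sketch of just the healthz step; the URL is the one from the log, and skipping TLS verification is an illustrative shortcut rather than what minikube itself does:

	// healthz_probe.go: sketch of the apiserver health check logged above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustrative shortcut only; real tooling should trust the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.49.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with body "ok", matching the log.
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}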

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

TestMultiControlPlane/serial/RestartSecondaryNode (28.83s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-333992 node start m02 --alsologtostderr -v 5: (27.555228168s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-333992 status --alsologtostderr -v 5: (1.156269554s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (28.83s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.304986532s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (210.36s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 stop --alsologtostderr -v 5
E1018 09:58:26.203089  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-333992 stop --alsologtostderr -v 5: (27.091843137s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 start --wait true --alsologtostderr -v 5
E1018 09:58:47.453350  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:01:03.591626  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-333992 start --wait true --alsologtostderr -v 5: (3m3.0569957s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (210.36s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.26s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 node delete m03 --alsologtostderr -v 5
E1018 10:01:31.294816  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-333992 node delete m03 --alsologtostderr -v 5: (11.252650453s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.26s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMultiControlPlane/serial/StopCluster (36.23s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-333992 stop --alsologtostderr -v 5: (36.121708895s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-333992 status --alsologtostderr -v 5: exit status 7 (104.251784ms)

                                                
                                                
-- stdout --
	ha-333992
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-333992-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-333992-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 10:02:19.848141  350703 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:02:19.848277  350703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:02:19.848288  350703 out.go:374] Setting ErrFile to fd 2...
	I1018 10:02:19.848294  350703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:02:19.848640  350703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:02:19.848856  350703 out.go:368] Setting JSON to false
	I1018 10:02:19.848886  350703 mustload.go:65] Loading cluster: ha-333992
	I1018 10:02:19.849591  350703 config.go:182] Loaded profile config "ha-333992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:02:19.849610  350703 status.go:174] checking status of ha-333992 ...
	I1018 10:02:19.850290  350703 cli_runner.go:164] Run: docker container inspect ha-333992 --format={{.State.Status}}
	I1018 10:02:19.850958  350703 notify.go:220] Checking for updates...
	I1018 10:02:19.868142  350703 status.go:371] ha-333992 host status = "Stopped" (err=<nil>)
	I1018 10:02:19.868173  350703 status.go:384] host is not running, skipping remaining checks
	I1018 10:02:19.868181  350703 status.go:176] ha-333992 status: &{Name:ha-333992 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 10:02:19.868206  350703 status.go:174] checking status of ha-333992-m02 ...
	I1018 10:02:19.868506  350703 cli_runner.go:164] Run: docker container inspect ha-333992-m02 --format={{.State.Status}}
	I1018 10:02:19.886868  350703 status.go:371] ha-333992-m02 host status = "Stopped" (err=<nil>)
	I1018 10:02:19.886892  350703 status.go:384] host is not running, skipping remaining checks
	I1018 10:02:19.886898  350703 status.go:176] ha-333992-m02 status: &{Name:ha-333992-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 10:02:19.886917  350703 status.go:174] checking status of ha-333992-m04 ...
	I1018 10:02:19.887206  350703 cli_runner.go:164] Run: docker container inspect ha-333992-m04 --format={{.State.Status}}
	I1018 10:02:19.904321  350703 status.go:371] ha-333992-m04 host status = "Stopped" (err=<nil>)
	I1018 10:02:19.904345  350703 status.go:384] host is not running, skipping remaining checks
	I1018 10:02:19.904353  350703 status.go:176] ha-333992-m04 status: &{Name:ha-333992-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.23s)
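
Note the contract the assertion above depends on: with every host stopped, status exits 7 yet still prints per-node state on stdout. A minimal Go sketch of reading both the exit code and the output; the binary path and profile name are copied from the log:

	// status_exitcode.go: sketch of handling the non-zero status exit above.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-333992", "status")
		out, err := cmd.Output() // stdout is populated even on a non-zero exit
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Per the log above, exit status 7 means some hosts are not running.
			fmt.Printf("status exited %d (some nodes stopped)\n", exitErr.ExitCode())
		} else if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}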

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (69.06s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1018 10:03:26.203188  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-333992 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m8.051508014s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (69.06s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

TestMultiControlPlane/serial/AddSecondaryNode (78.54s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-333992 node add --control-plane --alsologtostderr -v 5: (1m17.472688385s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-333992 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-333992 status --alsologtostderr -v 5: (1.069313479s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.54s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.081699027s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

TestJSONOutput/start/Command (86.6s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-604405 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1018 10:06:03.596820  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-604405 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m26.597645942s)
--- PASS: TestJSONOutput/start/Command (86.60s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.82s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-604405 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-604405 --output=json --user=testUser: (5.816520851s)
--- PASS: TestJSONOutput/stop/Command (5.82s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-221239 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-221239 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (101.45525ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"12890d5d-6f25-4d1e-b372-f2b450be3185","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-221239] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d6bf82c-a12b-4c89-9856-bb7186b49b3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21764"}}
	{"specversion":"1.0","id":"7aa53225-617a-4fd1-8436-92b497f9beab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a95cf6cd-32f6-41e9-94f2-e1f256950e57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig"}}
	{"specversion":"1.0","id":"71b0a573-db73-4620-b6c3-aff631b3f70a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube"}}
	{"specversion":"1.0","id":"27af4dce-fd22-4ccf-a0cd-4f7e49c43839","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"00fd40fd-d1ad-4769-996b-58f12b4e4ec8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7223d4f9-bd1d-4bfa-b5b6-c9737043f9e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-221239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-221239
--- PASS: TestErrorJSONOutput (0.24s)
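
Each stdout line above is a CloudEvents-style JSON object with a "type" field and a string-keyed "data" payload; the final io.k8s.sigs.minikube.error event carries the exit code. A minimal Go sketch of consuming such a stream; the struct is a convenience for illustration, not minikube's own type:

	// jsonevents.go: sketch of decoding the event stream shown above.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. pipe "minikube start --output=json" in
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate non-JSON lines
			}
			// Error events carry an exit code, as in DRV_UNSUPPORTED_OS above.
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}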

                                                
                                    
TestKicCustomNetwork/create_custom_network (37.52s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-273595 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-273595 --network=: (35.418481025s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-273595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-273595
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-273595: (2.080901866s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.52s)

TestKicCustomNetwork/use_default_bridge_network (36.72s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-126273 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-126273 --network=bridge: (34.575025784s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-126273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-126273
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-126273: (2.123804732s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.72s)

TestKicExistingNetwork (36.27s)

=== RUN   TestKicExistingNetwork
I1018 10:07:54.637793  295193 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1018 10:07:54.654332  295193 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1018 10:07:54.655530  295193 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1018 10:07:54.655569  295193 cli_runner.go:164] Run: docker network inspect existing-network
W1018 10:07:54.671662  295193 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1018 10:07:54.671696  295193 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1018 10:07:54.671710  295193 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1018 10:07:54.671831  295193 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1018 10:07:54.689089  295193 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-57e2bd20fa2f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:61:d0:06:18:0c} reservation:<nil>}
I1018 10:07:54.693047  295193 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1018 10:07:54.693465  295193 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400166c860}
I1018 10:07:54.694058  295193 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I1018 10:07:54.694138  295193 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1018 10:07:54.753892  295193 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-914451 --network=existing-network
E1018 10:08:26.202970  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-914451 --network=existing-network: (33.982777242s)
helpers_test.go:175: Cleaning up "existing-network-914451" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-914451
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-914451: (2.133925702s)
I1018 10:08:30.888018  295193 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.27s)
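
The log above shows the setup this test does before starting minikube: probe for a free private /24, then create a labeled bridge network for --network=existing-network to join. A minimal Go sketch of the creation step; subnet, gateway, and label are copied from the log, and the extra -o bridge options are omitted for brevity:

	// precreate_network.go: sketch of the docker network setup logged above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.67.0/24",
			"--gateway=192.168.67.1",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"existing-network").CombinedOutput()
		if err != nil {
			fmt.Printf("network create failed: %v\n%s", err, out)
			return
		}
		// minikube start -p <profile> --network=existing-network can now attach to it.
		fmt.Println("created network existing-network")
	}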

                                                
                                    
TestKicCustomSubnet (38.26s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-727619 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-727619 --subnet=192.168.60.0/24: (36.035903735s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-727619 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-727619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-727619
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-727619: (2.195331703s)
--- PASS: TestKicCustomSubnet (38.26s)

TestKicStaticIP (34.53s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-209301 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-209301 --static-ip=192.168.200.200: (32.154658911s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-209301 ip
helpers_test.go:175: Cleaning up "static-ip-209301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-209301
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-209301: (2.209710234s)
--- PASS: TestKicStaticIP (34.53s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (72.63s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-177214 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-177214 --driver=docker  --container-runtime=crio: (30.613095368s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-179750 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-179750 --driver=docker  --container-runtime=crio: (36.428214235s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-177214
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-179750
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-179750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-179750
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-179750: (2.091414767s)
helpers_test.go:175: Cleaning up "first-177214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-177214
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-177214: (2.046912097s)
--- PASS: TestMinikubeProfile (72.63s)

TestMountStart/serial/StartWithMountFirst (7.23s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-894993 --memory=3072 --mount-string /tmp/TestMountStartserial2509685570/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-894993 --memory=3072 --mount-string /tmp/TestMountStartserial2509685570/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.225321216s)
E1018 10:11:03.591264  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMountStart/serial/StartWithMountFirst (7.23s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-894993 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.20s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-896831 --memory=3072 --mount-string /tmp/TestMountStartserial2509685570/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-896831 --memory=3072 --mount-string /tmp/TestMountStartserial2509685570/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.203867179s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.20s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-896831 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.74s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-894993 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-894993 --alsologtostderr -v=5: (1.740898585s)
--- PASS: TestMountStart/serial/DeleteFirst (1.74s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-896831 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-896831
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-896831: (1.286691812s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.28s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-896831
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-896831: (7.279140722s)
--- PASS: TestMountStart/serial/RestartStopped (8.28s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-896831 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (139.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-710351 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1018 10:11:29.279336  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:12:26.656789  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:13:26.203081  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-710351 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m18.480165209s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (139.02s)
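
The two-node bring-up is a single start invocation. A condensed sketch with a placeholder profile name, using the flags from the run above:

    minikube start -p mn-demo --wait=true --memory=3072 --nodes=2 \
      --driver=docker --container-runtime=crio
    minikube -p mn-demo status --alsologtostderr          # both nodes should report Running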

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-710351 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-710351 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-710351 -- rollout status deployment/busybox: (3.293296129s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-710351 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-710351 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-710351 -- exec busybox-7b57f96db7-m2q5b -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-710351 -- exec busybox-7b57f96db7-sntns -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-710351 -- exec busybox-7b57f96db7-m2q5b -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-710351 -- exec busybox-7b57f96db7-sntns -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-710351 -- exec busybox-7b57f96db7-m2q5b -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-710351 -- exec busybox-7b57f96db7-sntns -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.28s)
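
The DNS assertions reduce to three lookups per pod. A sketch against an already-applied busybox deployment; the label selector app=busybox is an assumption (the testdata manifest is not shown here), and the pod name is fetched rather than hard-coded because the -7b57f96db7 hashes above are per-run:

    POD=$(kubectl get pods -l app=busybox -o jsonpath='{.items[0].metadata.name}')
    kubectl exec "$POD" -- nslookup kubernetes.io
    kubectl exec "$POD" -- nslookup kubernetes.default
    kubectl exec "$POD" -- nslookup kubernetes.default.svc.cluster.local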

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.90s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-710351 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-710351 -- exec busybox-7b57f96db7-m2q5b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-710351 -- exec busybox-7b57f96db7-m2q5b -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-710351 -- exec busybox-7b57f96db7-sntns -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-710351 -- exec busybox-7b57f96db7-sntns -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)
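
The host reachability check extracts the host.minikube.internal address from nslookup output, then pings it. The same shell pipeline as above, reusing $POD from the previous sketch:

    HOST_IP=$(kubectl exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl exec "$POD" -- ping -c 1 "$HOST_IP"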

                                                
                                    
TestMultiNode/serial/AddNode (56.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-710351 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-710351 -v=5 --alsologtostderr: (55.720412714s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.45s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-710351 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.12s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.70s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 cp testdata/cp-test.txt multinode-710351:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 cp multinode-710351:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2229149457/001/cp-test_multinode-710351.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 cp multinode-710351:/home/docker/cp-test.txt multinode-710351-m02:/home/docker/cp-test_multinode-710351_multinode-710351-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351-m02 "sudo cat /home/docker/cp-test_multinode-710351_multinode-710351-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 cp multinode-710351:/home/docker/cp-test.txt multinode-710351-m03:/home/docker/cp-test_multinode-710351_multinode-710351-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351-m03 "sudo cat /home/docker/cp-test_multinode-710351_multinode-710351-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 cp testdata/cp-test.txt multinode-710351-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 cp multinode-710351-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2229149457/001/cp-test_multinode-710351-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 cp multinode-710351-m02:/home/docker/cp-test.txt multinode-710351:/home/docker/cp-test_multinode-710351-m02_multinode-710351.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351 "sudo cat /home/docker/cp-test_multinode-710351-m02_multinode-710351.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 cp multinode-710351-m02:/home/docker/cp-test.txt multinode-710351-m03:/home/docker/cp-test_multinode-710351-m02_multinode-710351-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351-m03 "sudo cat /home/docker/cp-test_multinode-710351-m02_multinode-710351-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 cp testdata/cp-test.txt multinode-710351-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 cp multinode-710351-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2229149457/001/cp-test_multinode-710351-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 cp multinode-710351-m03:/home/docker/cp-test.txt multinode-710351:/home/docker/cp-test_multinode-710351-m03_multinode-710351.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351 "sudo cat /home/docker/cp-test_multinode-710351-m03_multinode-710351.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 cp multinode-710351-m03:/home/docker/cp-test.txt multinode-710351-m02:/home/docker/cp-test_multinode-710351-m03_multinode-710351-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 ssh -n multinode-710351-m02 "sudo cat /home/docker/cp-test_multinode-710351-m03_multinode-710351-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.34s)
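
Every assertion above pairs a "minikube cp" in one of three directions with an ssh'd cat to verify the contents. The general shapes, with /tmp/out.txt as a placeholder destination:

    minikube -p multinode-710351 cp testdata/cp-test.txt multinode-710351:/home/docker/cp-test.txt  # host -> node
    minikube -p multinode-710351 cp multinode-710351:/home/docker/cp-test.txt /tmp/out.txt          # node -> host
    minikube -p multinode-710351 cp multinode-710351:/home/docker/cp-test.txt \
      multinode-710351-m02:/home/docker/cp-test.txt                                                 # node -> node
    minikube -p multinode-710351 ssh -n multinode-710351-m02 "sudo cat /home/docker/cp-test.txt"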

                                                
                                    
TestMultiNode/serial/StopNode (2.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-710351 node stop m03: (1.517306706s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-710351 status: exit status 7 (553.697997ms)

                                                
                                                
-- stdout --
	multinode-710351
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-710351-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-710351-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-710351 status --alsologtostderr: exit status 7 (548.321107ms)

                                                
                                                
-- stdout --
	multinode-710351
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-710351-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-710351-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 10:15:02.004288  401116 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:15:02.004535  401116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:15:02.004567  401116 out.go:374] Setting ErrFile to fd 2...
	I1018 10:15:02.004686  401116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:15:02.005177  401116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:15:02.005446  401116 out.go:368] Setting JSON to false
	I1018 10:15:02.005490  401116 mustload.go:65] Loading cluster: multinode-710351
	I1018 10:15:02.005565  401116 notify.go:220] Checking for updates...
	I1018 10:15:02.005958  401116 config.go:182] Loaded profile config "multinode-710351": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:15:02.005976  401116 status.go:174] checking status of multinode-710351 ...
	I1018 10:15:02.006504  401116 cli_runner.go:164] Run: docker container inspect multinode-710351 --format={{.State.Status}}
	I1018 10:15:02.030610  401116 status.go:371] multinode-710351 host status = "Running" (err=<nil>)
	I1018 10:15:02.030644  401116 host.go:66] Checking if "multinode-710351" exists ...
	I1018 10:15:02.030986  401116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-710351
	I1018 10:15:02.055319  401116 host.go:66] Checking if "multinode-710351" exists ...
	I1018 10:15:02.055668  401116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:15:02.055708  401116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-710351
	I1018 10:15:02.076061  401116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/multinode-710351/id_rsa Username:docker}
	I1018 10:15:02.178498  401116 ssh_runner.go:195] Run: systemctl --version
	I1018 10:15:02.184774  401116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:15:02.197855  401116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:15:02.262477  401116 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 10:15:02.252580038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:15:02.263049  401116 kubeconfig.go:125] found "multinode-710351" server: "https://192.168.58.2:8443"
	I1018 10:15:02.263082  401116 api_server.go:166] Checking apiserver status ...
	I1018 10:15:02.263133  401116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 10:15:02.275010  401116 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1242/cgroup
	I1018 10:15:02.283811  401116 api_server.go:182] apiserver freezer: "5:freezer:/docker/a45e640f4f2bd3e3fc64d64238b02e184dc6942f7f7bcbfde951d4131af3a342/crio/crio-57163633a8f41b15d136dd89d656c7f5d0475c7678553d2f5cd15948bfcd63f1"
	I1018 10:15:02.283878  401116 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a45e640f4f2bd3e3fc64d64238b02e184dc6942f7f7bcbfde951d4131af3a342/crio/crio-57163633a8f41b15d136dd89d656c7f5d0475c7678553d2f5cd15948bfcd63f1/freezer.state
	I1018 10:15:02.291715  401116 api_server.go:204] freezer state: "THAWED"
	I1018 10:15:02.291742  401116 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1018 10:15:02.300047  401116 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1018 10:15:02.300078  401116 status.go:463] multinode-710351 apiserver status = Running (err=<nil>)
	I1018 10:15:02.300089  401116 status.go:176] multinode-710351 status: &{Name:multinode-710351 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 10:15:02.300139  401116 status.go:174] checking status of multinode-710351-m02 ...
	I1018 10:15:02.300477  401116 cli_runner.go:164] Run: docker container inspect multinode-710351-m02 --format={{.State.Status}}
	I1018 10:15:02.318260  401116 status.go:371] multinode-710351-m02 host status = "Running" (err=<nil>)
	I1018 10:15:02.318284  401116 host.go:66] Checking if "multinode-710351-m02" exists ...
	I1018 10:15:02.318589  401116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-710351-m02
	I1018 10:15:02.345000  401116 host.go:66] Checking if "multinode-710351-m02" exists ...
	I1018 10:15:02.345344  401116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 10:15:02.345388  401116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-710351-m02
	I1018 10:15:02.363405  401116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21764-293333/.minikube/machines/multinode-710351-m02/id_rsa Username:docker}
	I1018 10:15:02.466559  401116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 10:15:02.478968  401116 status.go:176] multinode-710351-m02 status: &{Name:multinode-710351-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1018 10:15:02.479001  401116 status.go:174] checking status of multinode-710351-m03 ...
	I1018 10:15:02.479301  401116 cli_runner.go:164] Run: docker container inspect multinode-710351-m03 --format={{.State.Status}}
	I1018 10:15:02.496164  401116 status.go:371] multinode-710351-m03 host status = "Stopped" (err=<nil>)
	I1018 10:15:02.496203  401116 status.go:384] host is not running, skipping remaining checks
	I1018 10:15:02.496210  401116 status.go:176] multinode-710351-m03 status: &{Name:multinode-710351-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.62s)
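
Note that "minikube status" deliberately exits non-zero (7 in this run) when any node is stopped, so scripting around it means branching on the exit code rather than treating it as a failure. A sketch:

    minikube -p multinode-710351 status --alsologtostderr
    rc=$?
    if [ "$rc" -eq 7 ]; then
      echo "at least one host/kubelet is stopped (exit 7, as in the run above)"
    fi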

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-710351 node start m03 -v=5 --alsologtostderr: (7.559278129s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.37s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (76.40s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-710351
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-710351
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-710351: (25.033349936s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-710351 --wait=true -v=5 --alsologtostderr
E1018 10:16:03.591874  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-710351 --wait=true -v=5 --alsologtostderr: (51.246520715s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-710351
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.40s)
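
The invariant under test is that the node list is unchanged across a full stop/start cycle. A sketch of the same comparison:

    before=$(minikube node list -p multinode-710351)
    minikube stop -p multinode-710351
    minikube start -p multinode-710351 --wait=true -v=5 --alsologtostderr
    after=$(minikube node list -p multinode-710351)
    [ "$before" = "$after" ] && echo "node list preserved across restart"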

                                                
                                    
TestMultiNode/serial/DeleteNode (5.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-710351 node delete m03: (4.992048749s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.68s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-710351 stop: (23.823500639s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-710351 status: exit status 7 (94.208489ms)

                                                
                                                
-- stdout --
	multinode-710351
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-710351-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-710351 status --alsologtostderr: exit status 7 (93.380671ms)

                                                
                                                
-- stdout --
	multinode-710351
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-710351-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 10:16:56.923430  408899 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:16:56.923636  408899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:16:56.923668  408899 out.go:374] Setting ErrFile to fd 2...
	I1018 10:16:56.923690  408899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:16:56.923986  408899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:16:56.924238  408899 out.go:368] Setting JSON to false
	I1018 10:16:56.924301  408899 mustload.go:65] Loading cluster: multinode-710351
	I1018 10:16:56.924768  408899 config.go:182] Loaded profile config "multinode-710351": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:16:56.924824  408899 status.go:174] checking status of multinode-710351 ...
	I1018 10:16:56.924344  408899 notify.go:220] Checking for updates...
	I1018 10:16:56.926456  408899 cli_runner.go:164] Run: docker container inspect multinode-710351 --format={{.State.Status}}
	I1018 10:16:56.944193  408899 status.go:371] multinode-710351 host status = "Stopped" (err=<nil>)
	I1018 10:16:56.944216  408899 status.go:384] host is not running, skipping remaining checks
	I1018 10:16:56.944223  408899 status.go:176] multinode-710351 status: &{Name:multinode-710351 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 10:16:56.944249  408899 status.go:174] checking status of multinode-710351-m02 ...
	I1018 10:16:56.944561  408899 cli_runner.go:164] Run: docker container inspect multinode-710351-m02 --format={{.State.Status}}
	I1018 10:16:56.966814  408899 status.go:371] multinode-710351-m02 host status = "Stopped" (err=<nil>)
	I1018 10:16:56.966842  408899 status.go:384] host is not running, skipping remaining checks
	I1018 10:16:56.966861  408899 status.go:176] multinode-710351-m02 status: &{Name:multinode-710351-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.01s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (56.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-710351 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-710351 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (55.495272054s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-710351 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.21s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-710351
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-710351-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-710351-m02 --driver=docker  --container-runtime=crio: exit status 14 (90.327068ms)

                                                
                                                
-- stdout --
	* [multinode-710351-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-710351-m02' is duplicated with machine name 'multinode-710351-m02' in profile 'multinode-710351'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-710351-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-710351-m03 --driver=docker  --container-runtime=crio: (32.570554549s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-710351
E1018 10:18:26.203240  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-710351: exit status 80 (353.047727ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-710351 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-710351-m03 already exists in multinode-710351-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-710351-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-710351-m03: (2.147715605s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.21s)

                                                
                                    
TestPreload (123.09s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-968310 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-968310 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.361436098s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-968310 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-968310 image pull gcr.io/k8s-minikube/busybox: (2.158899221s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-968310
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-968310: (5.956275643s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-968310 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-968310 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (49.915876389s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-968310 image list
helpers_test.go:175: Cleaning up "test-preload-968310" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-968310
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-968310: (2.451472339s)
--- PASS: TestPreload (123.09s)
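
The sequence checks that an image pulled into a --preload=false cluster survives a stop/start cycle. Condensed, with a placeholder profile name:

    minikube start -p preload-demo --memory=3072 --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --memory=3072 --wait=true \
      --driver=docker --container-runtime=crio
    minikube -p preload-demo image list | grep busybox    # the pulled image should still be listed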

                                                
                                    
TestInsufficientStorage (13.33s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-971499 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-971499 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.767913912s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9f1e765c-6eb8-4b39-af60-5db1293b64ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-971499] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec4abf41-e176-4acb-9d5b-97d93d10f202","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21764"}}
	{"specversion":"1.0","id":"3d84cdd7-59bb-429c-9b89-a61c664a181c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6d76a88d-87bb-4966-ae27-0d6eba50a89f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig"}}
	{"specversion":"1.0","id":"0835c36a-0ef1-47b5-b457-627c9cf3d7cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube"}}
	{"specversion":"1.0","id":"9bbb38d1-603e-4720-a514-66111d0b1482","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"257f35fe-43dc-439c-b768-8c63c9c13ddf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"940e1d91-3d54-4074-8325-f4a75ca6c549","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"36c3a945-ba98-4c57-9cbb-572e295852ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9d8b6e6c-30cb-4bc3-b76c-1dbc50846fbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"52edf478-fc8b-4f4c-91a7-e79e69ef06b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"455ba963-1529-4a8d-916c-d392e3eea5b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-971499\" primary control-plane node in \"insufficient-storage-971499\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c0975b8-b573-4c6d-b69c-2e80740f2e22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"257a8a88-188d-4603-893f-bb2fad346e36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5aac9b3d-30e8-444e-b3ef-5c2231aadc4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-971499 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-971499 --output=json --layout=cluster: exit status 7 (295.187022ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-971499","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-971499","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1018 10:21:27.177974  424925 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-971499" does not appear in /home/jenkins/minikube-integration/21764-293333/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-971499 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-971499 --output=json --layout=cluster: exit status 7 (307.905835ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-971499","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-971499","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1018 10:21:27.487149  424992 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-971499" does not appear in /home/jenkins/minikube-integration/21764-293333/kubeconfig
	E1018 10:21:27.497289  424992 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/insufficient-storage-971499/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-971499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-971499
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-971499: (1.956894516s)
--- PASS: TestInsufficientStorage (13.33s)
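
With --output=json, minikube emits one CloudEvent per line, so the storage failure can be picked out by event type. A sketch assuming jq is available; the type and field names are taken from the events above:

    minikube start -p demo --output=json --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'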

                                                
                                    
TestRunningBinaryUpgrade (54.48s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2765378470 start -p running-upgrade-522749 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1018 10:26:03.594090  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2765378470 start -p running-upgrade-522749 --memory=3072 --vm-driver=docker  --container-runtime=crio: (29.898591635s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-522749 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-522749 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.987184722s)
helpers_test.go:175: Cleaning up "running-upgrade-522749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-522749
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-522749: (2.0487191s)
--- PASS: TestRunningBinaryUpgrade (54.48s)
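
The upgrade path is simply "start with the old binary, then start the same profile with the new one", which adopts the running cluster in place. A sketch with a placeholder profile name; note the old v1.32.0 binary still took --vm-driver where the current one takes --driver:

    /tmp/minikube-v1.32.0.2765378470 start -p upgrade-demo --memory=3072 \
      --vm-driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p upgrade-demo --memory=3072 \
      --driver=docker --container-runtime=crio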

                                                
                                    
TestKubernetesUpgrade (208.14s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-297181 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-297181 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.729227875s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-297181
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-297181: (1.489623872s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-297181 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-297181 status --format={{.Host}}: exit status 7 (136.60325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-297181 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-297181 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (2m1.244701869s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-297181 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-297181 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-297181 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (123.758334ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-297181] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-297181
	    minikube start -p kubernetes-upgrade-297181 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2971812 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-297181 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-297181 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-297181 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.991067736s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-297181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-297181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-297181: (2.286966252s)
--- PASS: TestKubernetesUpgrade (208.14s)
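
In-place version changes go through --kubernetes-version: the upgrade succeeds after a stop, while a downgrade of the same profile is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) and the recreate/second-cluster suggestions shown above. The core flow, with a placeholder profile name:

    minikube start -p kupgrade --memory=3072 --kubernetes-version=v1.28.0 \
      --driver=docker --container-runtime=crio
    minikube stop -p kupgrade
    minikube start -p kupgrade --memory=3072 --kubernetes-version=v1.34.1 \
      --driver=docker --container-runtime=crio
    minikube start -p kupgrade --kubernetes-version=v1.28.0 \
      --driver=docker --container-runtime=crio || echo "downgrade refused: exit $?"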

                                                
                                    
TestMissingContainerUpgrade (120.33s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1314219481 start -p missing-upgrade-495276 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1314219481 start -p missing-upgrade-495276 --memory=3072 --driver=docker  --container-runtime=crio: (1m1.900227482s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-495276
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-495276: (1.343784958s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-495276
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-495276 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-495276 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (53.867132189s)
helpers_test.go:175: Cleaning up "missing-upgrade-495276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-495276
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-495276: (2.455783962s)
--- PASS: TestMissingContainerUpgrade (120.33s)
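
The sequence above is worth spelling out: the cluster is created with an old minikube release, its container is then stopped and removed behind minikube's back, and the current binary has to detect the missing container and rebuild it. A sketch of that scenario, assuming the docker driver (where the node container is named after the profile, as the log's "docker stop missing-upgrade-495276" shows); run() is a hypothetical helper and the binary paths are the ones from this run:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and aborts with its combined output on failure.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
}

func main() {
	profile := "missing-upgrade-495276"
	run("/tmp/minikube-v1.32.0.1314219481", "start", "-p", profile,
		"--memory=3072", "--driver=docker", "--container-runtime=crio")
	// Delete the node container out from under minikube.
	run("docker", "stop", profile)
	run("docker", "rm", profile)
	// The current binary must notice the container is gone and recreate it.
	run("out/minikube-linux-arm64", "start", "-p", profile,
		"--memory=3072", "--driver=docker", "--container-runtime=crio")
}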

TestPause/serial/Start (90.52s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-019243 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-019243 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m30.521173274s)
--- PASS: TestPause/serial/Start (90.52s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-403599 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-403599 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (123.127734ms)

-- stdout --
	* [NoKubernetes-403599] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
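
Exit status 14 here is minikube's MK_USAGE code: --no-kubernetes and --kubernetes-version contradict each other, and the test passes precisely because the invocation is rejected. A rough sketch of that kind of mutual-exclusion check, not minikube's actual implementation:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// Asking for no Kubernetes while pinning a version is contradictory.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // mirrors minikube's MK_USAGE exit status seen above
	}
	fmt.Println("flags are compatible")
}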

TestNoKubernetes/serial/StartWithK8s (45.02s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-403599 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-403599 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.534534787s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-403599 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.02s)

TestNoKubernetes/serial/StartWithStopK8s (7.43s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-403599 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-403599 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.099780602s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-403599 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-403599 status -o json: exit status 2 (313.764368ms)

-- stdout --
	{"Name":"NoKubernetes-403599","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-403599
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-403599: (2.019210043s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.43s)
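
The JSON printed by "status -o json" above is the interesting part: the host container runs while Kubelet and APIServer stay stopped. A small Go sketch that decodes exactly that shape; the struct fields are taken from the output above, and the binary is assumed to be on PATH as "minikube":

package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os/exec"
)

type clusterStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	out, err := exec.Command("minikube", "-p", "NoKubernetes-403599",
		"status", "-o", "json").Output()
	// In this run "status" exited 2 because components are stopped; the JSON
	// on stdout is still complete, so only non-exit errors are fatal here.
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		panic(err)
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	if st.Host != "Running" || st.Kubelet != "Stopped" {
		panic("expected a running host with Kubernetes stopped")
	}
}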

TestNoKubernetes/serial/Start (9.63s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-403599 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-403599 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.631241335s)
--- PASS: TestNoKubernetes/serial/Start (9.63s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-403599 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-403599 "sudo systemctl is-active --quiet service kubelet": exit status 1 (282.945149ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
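
The probe above leans on systemctl's exit-code contract: "is-active --quiet" prints nothing and exits 0 for an active unit and non-zero otherwise (3 shows up here), and minikube ssh propagates that status. A sketch of the same check, assuming minikube on PATH; kubeletActive is a hypothetical helper:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet unit is active inside the node.
func kubeletActive(profile string) (bool, error) {
	err := exec.Command("minikube", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet").Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // non-zero exit is an answer: the unit is not active
	}
	return false, err // ssh or invocation failure, not a status answer
}

func main() {
	active, err := kubeletActive("NoKubernetes-403599")
	if err != nil {
		panic(err)
	}
	fmt.Println("kubelet active:", active)
}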

TestNoKubernetes/serial/ProfileList (1.11s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.11s)

TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-403599
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-403599: (1.304102879s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

TestNoKubernetes/serial/StartNoArgs (7.03s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-403599 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-403599 --driver=docker  --container-runtime=crio: (7.032322894s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.03s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-403599 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-403599 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.324379ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestPause/serial/SecondStartNoReconfiguration (31.94s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-019243 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1018 10:23:26.202964  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-019243 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.919953958s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.94s)
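
The cert_rotation line interleaved with this test looks alarming but appears to be background noise rather than part of the result: the shared kubeconfig evidently still references the client certificate of the addons-006674 profile, which was deleted earlier in the run, so client-go's certificate reloader logs the missing file whenever it fires. The pause test itself passes unaffected.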

TestStoppedBinaryUpgrade/Setup (0.7s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

TestStoppedBinaryUpgrade/Upgrade (58.04s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.496347595 start -p stopped-upgrade-186410 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.496347595 start -p stopped-upgrade-186410 --memory=3072 --vm-driver=docker  --container-runtime=crio: (37.444994455s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.496347595 -p stopped-upgrade-186410 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.496347595 -p stopped-upgrade-186410 stop: (1.242484257s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-186410 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-186410 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.353730871s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (58.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-186410
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-186410: (1.18350102s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

TestNetworkPlugins/group/false (4.7s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-881658 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-881658 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (184.523533ms)

-- stdout --
	* [false-881658] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1018 10:27:17.240753  458547 out.go:360] Setting OutFile to fd 1 ...
	I1018 10:27:17.240921  458547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:27:17.240952  458547 out.go:374] Setting ErrFile to fd 2...
	I1018 10:27:17.240973  458547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 10:27:17.241270  458547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-293333/.minikube/bin
	I1018 10:27:17.241710  458547 out.go:368] Setting JSON to false
	I1018 10:27:17.242627  458547 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7788,"bootTime":1760775450,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1018 10:27:17.242720  458547 start.go:141] virtualization:  
	I1018 10:27:17.246434  458547 out.go:179] * [false-881658] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 10:27:17.250344  458547 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 10:27:17.250435  458547 notify.go:220] Checking for updates...
	I1018 10:27:17.256100  458547 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 10:27:17.259013  458547 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-293333/kubeconfig
	I1018 10:27:17.262013  458547 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-293333/.minikube
	I1018 10:27:17.264855  458547 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 10:27:17.267751  458547 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 10:27:17.271153  458547 config.go:182] Loaded profile config "force-systemd-flag-825845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 10:27:17.271296  458547 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 10:27:17.294588  458547 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 10:27:17.294724  458547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 10:27:17.359844  458547 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 10:27:17.350422678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 10:27:17.359957  458547 docker.go:318] overlay module found
	I1018 10:27:17.363030  458547 out.go:179] * Using the docker driver based on user configuration
	I1018 10:27:17.365813  458547 start.go:305] selected driver: docker
	I1018 10:27:17.365831  458547 start.go:925] validating driver "docker" against <nil>
	I1018 10:27:17.365846  458547 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 10:27:17.369304  458547 out.go:203] 
	W1018 10:27:17.372143  458547 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1018 10:27:17.374901  458547 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-881658 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-881658

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-881658

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-881658

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-881658

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-881658

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-881658

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-881658

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-881658

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-881658

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-881658

>>> host: /etc/nsswitch.conf:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: /etc/hosts:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: /etc/resolv.conf:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-881658

>>> host: crictl pods:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: crictl containers:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> k8s: describe netcat deployment:
error: context "false-881658" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-881658" does not exist

>>> k8s: netcat logs:
error: context "false-881658" does not exist

>>> k8s: describe coredns deployment:
error: context "false-881658" does not exist

>>> k8s: describe coredns pods:
error: context "false-881658" does not exist

>>> k8s: coredns logs:
error: context "false-881658" does not exist

>>> k8s: describe api server pod(s):
error: context "false-881658" does not exist

>>> k8s: api server logs:
error: context "false-881658" does not exist

>>> host: /etc/cni:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: ip a s:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: ip r s:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: iptables-save:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: iptables table nat:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> k8s: describe kube-proxy daemon set:
error: context "false-881658" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-881658" does not exist

>>> k8s: kube-proxy logs:
error: context "false-881658" does not exist

>>> host: kubelet daemon status:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: kubelet daemon config:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> k8s: kubelet logs:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-881658

>>> host: docker daemon status:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: docker daemon config:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: /etc/docker/daemon.json:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: docker system info:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: cri-docker daemon status:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: cri-docker daemon config:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: cri-dockerd version:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: containerd daemon status:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: containerd daemon config:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: /etc/containerd/config.toml:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: containerd config dump:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: crio daemon status:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: crio daemon config:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: /etc/crio:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

>>> host: crio config:
* Profile "false-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-881658"

----------------------- debugLogs end: false-881658 [took: 4.33049475s] --------------------------------
helpers_test.go:175: Cleaning up "false-881658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-881658
--- PASS: TestNetworkPlugins/group/false (4.70s)
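
Two things are packed into this entry. First, the MK_USAGE rejection is the expected outcome: the crio runtime needs a CNI plugin, so --cni=false must be refused. Second, the debugLogs dump is produced by a collector that runs a fixed list of probes and prints each under a ">>> label:" header; with no false-881658 context or profile, every probe fails with one of the same few messages. A minimal sketch of that collector pattern; the probe list below is illustrative, not the test's exact set:

package main

import (
	"fmt"
	"os/exec"
)

type probe struct {
	label string
	args  []string
}

func main() {
	profile := "false-881658"
	probes := []probe{
		{"netcat: nslookup kubernetes.default",
			[]string{"kubectl", "--context", profile, "exec", "deploy/netcat", "--",
				"nslookup", "kubernetes.default"}},
		{"host: /etc/resolv.conf",
			[]string{"minikube", "ssh", "-p", profile, "cat /etc/resolv.conf"}},
		{"k8s: kubectl config",
			[]string{"kubectl", "config", "view"}},
	}
	for _, p := range probes {
		fmt.Printf(">>> %s:\n", p.label)
		out, err := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println(err) // failures are recorded, not fatal: best-effort debugging
		}
		fmt.Println()
	}
}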

TestStartStop/group/old-k8s-version/serial/FirstStart (61.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1018 10:29:06.658172  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m1.601374395s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.60s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-309062 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7e943026-3e85-454f-a324-37c76beb91b8] Pending
helpers_test.go:352: "busybox" [7e943026-3e85-454f-a324-37c76beb91b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7e943026-3e85-454f-a324-37c76beb91b8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00328126s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-309062 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.35s)
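
The DeployApp step waits for the busybox pod to move through Pending into Running before running "ulimit -n" inside it. A sketch of that wait as a jsonpath poll; the context and label come from the log, while the interval is arbitrary and, unlike the real helper, this only checks the phase rather than full readiness:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(8 * time.Minute) // the test allows 8m0s
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-309062",
			"get", "pods", "-l", "integration-test=busybox", "-n", "default",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			running := len(phases) > 0
			for _, ph := range phases {
				if ph != "Running" {
					running = false
				}
			}
			if running {
				fmt.Println("busybox is Running")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for busybox")
}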

TestStartStop/group/old-k8s-version/serial/Stop (11.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-309062 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-309062 --alsologtostderr -v=3: (11.984253329s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.98s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-309062 -n old-k8s-version-309062
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-309062 -n old-k8s-version-309062: exit status 7 (77.512737ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-309062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
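
Note the "(may be ok)" remark: "minikube status" encodes cluster state in its exit code, and in this report exit status 7 consistently accompanies a Stopped host, which is exactly what a cluster should look like right after "minikube stop". A sketch of tolerating that case, assuming minikube on PATH:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-309062").Output()
	host := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("cluster is up, host:", host)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Expected after "minikube stop": the profile exists but is not running.
		fmt.Println("cluster is stopped, host:", host)
	default:
		panic(err)
	}
}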

TestStartStop/group/old-k8s-version/serial/SecondStart (52.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-309062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.273615008s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-309062 -n old-k8s-version-309062
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.68s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gt5x2" [fa2d2419-2697-4b0f-8b80-c51fb742e12c] Running
E1018 10:31:03.591342  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004417757s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gt5x2" [fa2d2419-2697-4b0f-8b80-c51fb742e12c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003429639s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-309062 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-309062 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
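
VerifyKubernetesImages diffs the images loaded in the node against the set minikube ships for the given Kubernetes version and reports the leftovers, which is where the three "non-minikube image" lines above come from. A rough sketch of that comparison, assuming "minikube image list" prints one image per line; the expected set below is a stand-in, not the test's real table:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	expected := map[string]bool{ // illustrative entries only
		"registry.k8s.io/pause:3.9":                  true,
		"gcr.io/k8s-minikube/storage-provisioner:v5": true,
	}
	out, err := exec.Command("minikube", "-p", "old-k8s-version-309062",
		"image", "list").Output()
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		img := strings.TrimSpace(line)
		if img != "" && !expected[img] {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}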

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m27.776670826s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.78s)

TestStartStop/group/embed-certs/serial/FirstStart (87.32s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m27.316177962s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.32s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-715182 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6a6ba823-a995-4243-bfa2-29e841489887] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6a6ba823-a995-4243-bfa2-29e841489887] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004274138s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-715182 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-101897 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [64611957-693c-42db-b15e-d2ca4cdf6692] Pending
helpers_test.go:352: "busybox" [64611957-693c-42db-b15e-d2ca4cdf6692] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [64611957-693c-42db-b15e-d2ca4cdf6692] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00349339s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-101897 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-715182 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-715182 --alsologtostderr -v=3: (11.956369155s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

TestStartStop/group/embed-certs/serial/Stop (12.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-101897 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-101897 --alsologtostderr -v=3: (12.00523343s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-715182 -n default-k8s-diff-port-715182
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-715182 -n default-k8s-diff-port-715182: exit status 7 (67.634874ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-715182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-715182 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.344581627s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-715182 -n default-k8s-diff-port-715182
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.78s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-101897 -n embed-certs-101897
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-101897 -n embed-certs-101897: exit status 7 (86.85465ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-101897 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (63.95s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1018 10:33:26.203593  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-101897 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m3.484186865s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-101897 -n embed-certs-101897
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (63.95s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jqgfc" [c554b1db-a745-4da6-9d1f-3d4e2759b03e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002989743s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
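
The harness polls pods by label selector until they are Running and healthy. A rough standalone equivalent of this wait, with the same selector, namespace, and 9m budget as the log (Ready is a close proxy for the harness's "healthy" check):

    kubectl --context default-k8s-diff-port-715182 -n kubernetes-dashboard \
        wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m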

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jqgfc" [c554b1db-a745-4da6-9d1f-3d4e2759b03e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002974749s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-715182 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-715182 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)
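
VerifyKubernetesImages lists the images present in the node's container runtime and reports any that are not stock minikube/Kubernetes images; here the busybox test image and kindnet are expected leftovers from earlier steps. To inspect the same data by hand (assuming the JSON output carries a repoTags field, as in current minikube releases):

    out/minikube-linux-arm64 -p default-k8s-diff-port-715182 image list --format=json | jq -r '.[].repoTags[]'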

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7tlh9" [ab31733e-962a-4dd9-9b6f-78be82a1d96b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004531519s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/FirstStart (76.23s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-027087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-027087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m16.225562087s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.23s)
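
--preload=false makes minikube skip the preloaded images/kubelet tarball and pull everything through the container runtime instead, which is why this FirstStart (~76s) runs noticeably longer than the preloaded default-k8s-diff-port start (~52s) above. The tarball it would otherwise reuse sits in the test host's cache; the path below follows minikube's conventional layout and is an assumption, not something this log shows:

    # Preload tarballs minikube normally reuses; left untouched under --preload=false
    ls /home/jenkins/minikube-integration/21764-293333/.minikube/cache/preloaded-tarball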

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7tlh9" [ab31733e-962a-4dd9-9b6f-78be82a1d96b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003789166s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-101897 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-101897 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/FirstStart (45.32s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-577403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1018 10:34:54.614302  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:35:04.855894  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:35:25.338047  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-577403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.31746843s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.32s)
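
This start threads kubeadm options through minikube: --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 sets the cluster's pod CIDR, and --wait=apiserver,system_pods,default_sa narrows the readiness gate, since with a bare CNI config no user workloads are expected to schedule yet. One way to confirm the CIDR landed, via the ConfigMap kubeadm writes (a sketch; the field name follows kubeadm's ClusterConfiguration):

    kubectl --context newest-cni-577403 -n kube-system get configmap kubeadm-config -o yaml | grep podSubnet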

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.61s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-577403 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-577403 --alsologtostderr -v=3: (1.612425026s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.61s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-577403 -n newest-cni-577403
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-577403 -n newest-cni-577403: exit status 7 (74.651159ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-577403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (15.87s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-577403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-577403 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.442794997s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-577403 -n newest-cni-577403
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.87s)

TestStartStop/group/no-preload/serial/DeployApp (8.51s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-027087 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [df688ec3-f32c-4bdb-8846-fe0eeaff3436] Pending
helpers_test.go:352: "busybox" [df688ec3-f32c-4bdb-8846-fe0eeaff3436] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [df688ec3-f32c-4bdb-8846-fe0eeaff3436] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004531824s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-027087 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.51s)
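
DeployApp applies the harness's testdata/busybox.yaml (not reproduced in this report) and then reads the container's open-file limit as a smoke test of kubectl exec. A minimal hand-rolled stand-in, assuming the same busybox image the image-list steps report:

    kubectl --context no-preload-027087 run busybox --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc --restart=Never -- sleep 3600
    kubectl --context no-preload-027087 wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context no-preload-027087 exec busybox -- /bin/sh -c "ulimit -n"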

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-577403 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Stop (12.4s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-027087 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-027087 --alsologtostderr -v=3: (12.401844849s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.40s)

TestNetworkPlugins/group/auto/Start (87.92s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-881658 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1018 10:36:06.299995  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-881658 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m27.923688052s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.92s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.58s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-027087 -n no-preload-027087
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-027087 -n no-preload-027087: exit status 7 (221.651907ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-027087 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.58s)

TestStartStop/group/no-preload/serial/SecondStart (62.84s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-027087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-027087 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m2.431668176s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-027087 -n no-preload-027087
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (62.84s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-trfvl" [4735cd3f-7f8f-4c4f-b3db-8a6544223c4e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.009528702s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-trfvl" [4735cd3f-7f8f-4c4f-b3db-8a6544223c4e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004207461s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-027087 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-027087 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-881658 "pgrep -a kubelet"
I1018 10:37:33.924600  295193 config.go:182] Loaded profile config "auto-881658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

TestNetworkPlugins/group/auto/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-881658 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xn9dh" [5c96269a-553a-47e1-88a1-219eea247d51] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xn9dh" [5c96269a-553a-47e1-88a1-219eea247d51] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.009390723s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.37s)
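
NetCatPod force-replaces the harness's netcat deployment and then waits for its pod via the app=netcat label. An approximately equivalent wait on the deployment itself, since rollout status returns once all replicas are available:

    kubectl --context auto-881658 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-881658 rollout status deployment/netcat --timeout=15m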

TestNetworkPlugins/group/kindnet/Start (86.77s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-881658 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-881658 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m26.769097621s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (86.77s)

TestNetworkPlugins/group/auto/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-881658 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-881658 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-881658 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
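
The DNS/Localhost/HairPin trio probes three distinct paths from inside the netcat pod: cluster DNS resolution, the pod's own loopback, and hairpin traffic, i.e. the pod reaching itself back through its own Service name. The nc flags used throughout: -z probes without sending data, -w 5 caps the connect timeout, -i 5 spaces probe attempts. Annotated form of the hairpin probe from the log:

    # "netcat" resolves to the Service fronting this very pod; success shows hairpin NAT works
    kubectl --context auto-881658 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"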

TestNetworkPlugins/group/calico/Start (61.82s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-881658 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1018 10:38:14.730815  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:38:26.203800  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/addons-006674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:38:35.212718  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-881658 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m1.824283641s)
--- PASS: TestNetworkPlugins/group/calico/Start (61.82s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-km5qw" [97c2181d-7462-4bad-a261-b2d89a26c142] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003826228s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-881658 "pgrep -a kubelet"
I1018 10:39:10.072165  295193 config.go:182] Loaded profile config "kindnet-881658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.38s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-881658 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nb2m2" [6a2701bf-2e96-471e-a7e1-d9f3171d2328] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nb2m2" [6a2701bf-2e96-471e-a7e1-d9f3171d2328] Running
E1018 10:39:16.174360  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003820379s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.38s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-gd79c" [bd6be216-308c-43c9-bf7d-3f356d519549] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-gd79c" [bd6be216-308c-43c9-bf7d-3f356d519549] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003715661s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
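
ControllerPod waits for the CNI's own control pod, here calico-node (a DaemonSet), to report healthy; the transient ContainersNotReady state above is normal while calico-node's init containers run. An equivalent label-based wait by hand:

    kubectl --context calico-881658 -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m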

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-881658 "pgrep -a kubelet"
I1018 10:39:19.154517  295193 config.go:182] Loaded profile config "calico-881658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-881658 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-99lzg" [253bbbc1-488e-4dea-bd04-6c6f75edb89e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-99lzg" [253bbbc1-488e-4dea-bd04-6c6f75edb89e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003617734s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.26s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-881658 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-881658 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-881658 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-881658 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-881658 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-881658 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/Start (75.61s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-881658 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-881658 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m15.605281721s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (75.61s)
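
Unlike the built-in --cni=calico/kindnet/flannel values, --cni here takes a path to an arbitrary CNI manifest (testdata/kube-flannel.yaml), which minikube applies after bringup. The manifest's DaemonSet lands in the kube-flannel namespace, as the later flannel ControllerPod step also shows; a quick check:

    kubectl --context custom-flannel-881658 -n kube-flannel get daemonset,pods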

TestNetworkPlugins/group/enable-default-cni/Start (78.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-881658 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1018 10:40:12.063612  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/old-k8s-version-309062/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:40:38.095774  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:40:48.122904  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:40:48.129373  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:40:48.140853  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:40:48.162353  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:40:48.203742  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:40:48.285330  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:40:48.446863  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:40:48.768676  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:40:49.410156  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:40:50.691662  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:40:53.253943  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:40:58.375928  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-881658 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m18.098786276s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.10s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-881658 "pgrep -a kubelet"
I1018 10:41:00.858584  295193 config.go:182] Loaded profile config "custom-flannel-881658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-881658 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nt6hw" [72903ea6-fd5b-423d-9286-ee7b79f90f41] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 10:41:03.591544  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/functional-679784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-nt6hw" [72903ea6-fd5b-423d-9286-ee7b79f90f41] Running
E1018 10:41:08.618171  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003812208s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-881658 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-881658 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-881658 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-881658 "pgrep -a kubelet"
I1018 10:41:13.555341  295193 config.go:182] Loaded profile config "enable-default-cni-881658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-881658 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d4pvm" [9d012649-31d8-41a0-a194-86f7b2b48e26] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d4pvm" [9d012649-31d8-41a0-a194-86f7b2b48e26] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005678081s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-881658 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-881658 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-881658 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/flannel/Start (71.33s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-881658 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-881658 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m11.330859407s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.33s)

TestNetworkPlugins/group/bridge/Start (57.44s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-881658 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1018 10:42:10.063154  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/no-preload-027087/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:42:34.257625  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:42:34.263905  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:42:34.275226  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:42:34.297066  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:42:34.339261  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:42:34.421275  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:42:34.583191  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:42:34.904622  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:42:35.546043  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:42:36.827685  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:42:39.390059  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:42:44.512028  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-881658 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (57.441738085s)
--- PASS: TestNetworkPlugins/group/bridge/Start (57.44s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-881658 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-bgq4n" [df586504-2f6c-4182-b53c-e24057683352] Running
I1018 10:42:45.891230  295193 config.go:182] Loaded profile config "bridge-881658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003092215s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-881658 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sf76f" [6691e172-d3ab-4ff9-8ed9-83c409ee60e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sf76f" [6691e172-d3ab-4ff9-8ed9-83c409ee60e4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003874383s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-881658 "pgrep -a kubelet"
I1018 10:42:52.129926  295193 config.go:182] Loaded profile config "flannel-881658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-881658 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dbdv8" [3ea69f15-b4a9-4ca8-a1ae-75fc6fc67020] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 10:42:54.223131  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/default-k8s-diff-port-715182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 10:42:54.753379  295193 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/auto-881658/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-dbdv8" [3ea69f15-b4a9-4ca8-a1ae-75fc6fc67020] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.00285507s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-881658 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)
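
A passing run of that nslookup resolves the service through the cluster DNS. Illustrative output only (the 10.96.x.x addresses are the conventional default service-CIDR values, not captured from this run):

    $ kubectl --context bridge-881658 exec deployment/netcat -- nslookup kubernetes.default
    Server:    10.96.0.10
    Address:   10.96.0.10#53

    Name:      kubernetes.default.svc.cluster.local
    Address:   10.96.0.1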

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-881658 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-881658 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
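
The HairPin check is the same nc probe pointed at the pod's own Service name (netcat), which only succeeds when hairpin (loopback NAT) traffic is allowed. An annotated version of the command above, using standard netcat flag semantics:

    # -z   : scan only, send no payload
    # -w 5 : time out a connection attempt after 5 seconds
    # -i 5 : pause 5 seconds between attempts
    # exit 0 => the pod reached netcat:8080 via its own Service ("hairpin")
    kubectl --context bridge-881658 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"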

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-881658 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-881658 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-881658 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.64s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-724083 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-724083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-724083
--- SKIP: TestDownloadOnlyKic (0.64s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-922359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-922359
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-881658 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-881658

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-881658

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-881658

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-881658

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-881658

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-881658

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-881658

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-881658

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-881658

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-881658

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-881658

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-881658" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-881658" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21764-293333/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 10:27:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: force-systemd-flag-825845
contexts:
- context:
    cluster: force-systemd-flag-825845
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 10:27:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: force-systemd-flag-825845
  name: force-systemd-flag-825845
current-context: force-systemd-flag-825845
kind: Config
preferences: {}
users:
- name: force-systemd-flag-825845
  user:
    client-certificate: /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/force-systemd-flag-825845/client.crt
    client-key: /home/jenkins/minikube-integration/21764-293333/.minikube/profiles/force-systemd-flag-825845/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-881658

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-881658"

                                                
                                                
----------------------- debugLogs end: kubenet-881658 [took: 4.911391485s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-881658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-881658
--- SKIP: TestNetworkPlugins/group/kubenet (5.06s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-881658 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-881658

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-881658

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-881658

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-881658

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-881658

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-881658

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-881658

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-881658

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-881658

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-881658

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-881658

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-881658" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-881658

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-881658

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-881658

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-881658

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-881658" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-881658" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-881658

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

>>> host: cri-dockerd version:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

>>> host: containerd daemon status:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

>>> host: containerd daemon config:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

>>> host: containerd config dump:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

>>> host: crio daemon status:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

>>> host: crio daemon config:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

>>> host: /etc/crio:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

>>> host: crio config:
* Profile "cilium-881658" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-881658"

----------------------- debugLogs end: cilium-881658 [took: 5.671223836s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-881658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-881658
--- SKIP: TestNetworkPlugins/group/cilium (5.90s)
